Perl FAQ
"Since Perl 5.6.1 the special variables @- and @+ can functionally replace $`, $& and $'. These arrays contain pointers to the beginning and end of each match (see perlvar for the full story), so they give you essentially the same information, but without the risk of excessive string copying."
Regex-Related Special Variables
Perl has a host of special variables that get filled after every m// or s/// regex match. $1, $2, $3, etc. hold the backreferences. $+ holds the last (highest-numbered) backreference. $& (dollar ampersand) holds the entire regex match.
@- is an array of match-start indices into the string. $-[0] holds the start of the entire regex match, $-[1] the start of the first backreference, etc. Likewise, @+ holds match-end indices (ends, not lengths).
$' (dollar followed by an apostrophe or single quote) holds the part of the string after (to the right of) the regex match. $` (dollar backtick) holds the part of the string before (to the left of) the regex match. Using these variables is not recommended in scripts when performance matters, as it causes Perl to slow down all regex matches in your entire script.
All these variables are read-only, and persist until the next regex match is attempted. They are dynamically scoped, as if they had an implicit 'local' at the start of the enclosing scope. Thus if you do a regex match, and call a sub that does a regex match, when that sub returns, your variables are still set as they were for the first match.
if ($lineCopy =~ /$joinedColumns/g) {
    my $start = $-[0];  # start index of the whole match; $+[0] is its end
    print "MATCH: Found '$&'. lineCopy= $lineCopy\n";
    print "MATCH: atminus = @- atplus = @+\n";
    # print "MATCH: Next attempt at character " . (pos($lineCopy) + 1) . "\n";
}
else {
    print "NO MATCH: line = $lineCopy joinedColumns = $joinedColumns\n";
}
MATCH: Found 'attachments,grinder attachments'. lineCopy= tools,attachments,grinder attachments
MATCH: atminus = 6 atplus = 37
NO MATCH: line = tools,attachments,hammer \& hammer drill attachments joinedColumns = attachments,hammer\ \&\ hammer\ drill\ attachments
MATCH: Found 'attachments,jig saw attachments'. lineCopy= tools,attachments,jig saw attachments
MATCH: atminus = 6 atplus = 37
MATCH: Found 'attachments,metal case'. lineCopy= tools,attachments,metal case
MATCH: atminus = 6 atplus = 28
MATCH: Found 'attachments,miter saw attachments'. lineCopy= tools,attachments,miter saw attachments
MATCH: atminus = 6 atplus = 39
MATCH: Found 'attachments,nibbler attachments'. lineCopy= tools,attachments,nibbler attachments
MATCH: atminus = 6 atplus = 37
Friday, April 23, 2010
Friday, April 16, 2010
Trac installation, including Trac HTML form-based authentication
trac-admin /home/trac/yo_web_services initenv
chown -R apache.apache /home/svn/yo_web_services
chown -R apache.apache /home/trac/yo_web_services
vim /etc/httpd/conf.d/trac.conf
>>
<Location /trac/yo_web_services>
    SetHandler mod_python
    PythonHandler trac.web.modpython_frontend
    PythonOption TracEnv /home/trac/yo_web_services
    PythonOption TracUriRoot /trac/yo_web_services
    AuthType Basic
    AuthName "trac"
    AuthUserFile /home/trac/trac.htpasswd
    # comment out the next line if using HTML form-based login via the Trac plugins
    # (per the trac-hacks page)
    # Require valid-user
</Location>
<<
touch /home/trac/yo_web_services.htpasswd
# Add users to the password file
htpasswd -m /home/trac/yo_web_services.htpasswd <username>
trac-admin /home/trac/yo_web_services permission add <username> TRAC_ADMIN
service httpd restart
Add the plugins from this page
http://trac-hacks.org/wiki/AccountManagerPlugin
Thursday, April 15, 2010
Thursday, March 18, 2010
Wednesday, March 17, 2010
Monday, March 8, 2010
Fedora 12 Cloudera Hadoop setup + Java JDK
Cloudera's Hadoop distribution
When installing Cloudera's Hadoop distribution on Fedora 12, make sure you install
the Sun Java JDK using the method recommended below.
Sun Java
Fedora Java installation
# yum install hadoop
Loaded plugins: presto, refresh-packagekit
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package hadoop.noarch 0:0.18.3-14.cloudera.CH0_3 set to be updated
--> Processing Dependency: jdk >= 1.6 for package: hadoop-0.18.3-14.cloudera.CH0_3.noarch
--> Finished Dependency Resolution
hadoop-0.18.3-14.cloudera.CH0_3.noarch from cloudera-stable has depsolving problems
--> Missing Dependency: jdk >= 1.6 is needed by package hadoop-0.18.3-14.cloudera.CH0_3.noarch (cloudera-stable)
Error: Missing Dependency: jdk >= 1.6 is needed by package hadoop-0.18.3-14.cloudera.CH0_3.noarch (cloudera-stable)
You could try using --skip-broken to work around the problem
You could try running: package-cleanup --problems
package-cleanup --dupes
rpm -Va --nofiles --nodigest
Cloudera RPM Java installation to avoid the yum install dep problem
Thursday, January 21, 2010
how to speed up Heritrix
I figured out why the Heritrix crawler was running at one page per second:
it was configured to run with the default Java VM heap size of 256m.
cat /etc/init.d/heritrix.sh
#!/bin/bash
/opt/heritrix/bin/heritrix --bind=yowb3 --admin=admin:admin
I changed this to 2048m and it now seems to be running about 10x faster:
cat /etc/init.d/heritrix.sh
#!/bin/bash
export JAVA_OPTS=" -Xmx2048m"
/opt/heritrix/bin/heritrix --bind=yowb3 --admin=admin:admin
-----------------
Rates
9.55 URIs/sec (16.1 avg)
246 KB/sec (389 avg)
Load
6 active of 50 threads
1 congestion ratio
Thursday, January 7, 2010
Lucene index writes per minute slow down
Sunday, January 3, 2010
Drupal/LAMP installation on Ubuntu
Install XAMPP (LAMP) and DRUPAL on Ubuntu
Old notes below:
1. Install LAMP
XAMPP install made easy: use the instructions on this site to install the LAMP stack.
2. Install DRUPAL
Reset the MySQL root password if necessary.
http://en.kioskea.net/faq/sujet-630-reinitializing-the-root-password-of-mysql
Install DRUPAL on Ubuntu
Alternate installation instructions with notes on security and important files
Friday, October 30, 2009
Boost smart pointers need a new object, not a pointer
typedef std::string TString;
typedef boost::shared_ptr<TString> RefString;

// this does not work -> nasty memory leak
RefString rsUnitName = RefString(new std::string((*i)->pType->unitType));
RefString rsInstanceName = RefString(new std::string((*i)->unitObjectName));
refMapUnitNameUnitInstName.get()->operator [](rsUnitName) = rsInstanceName;
refMapUnitNameUnitInstName->insert(std::make_pair(rsUnitName, rsInstanceName));
refMapUnitNameUnitInstName->insert(std::pair<RefString, RefString>(rsUnitName, rsInstanceName));

// this also causes a memory leak.
// pair can't figure out the size of the objects pointed to by the RefString for some reason
// and assigns a default of 8 bits
refMapUnitNameUnitInstName->insert(std::pair<RefString, RefString>(
    RefString(new std::string((*i)->pType->unitType)),
    RefString(new std::string((*i)->unitObjectName)))
);

// THIS WORKS!
// new the objects inside of the smart-pointer wrapper that is passed to std::pair
refMapUnitNameUnitInstName->insert(std::pair<RefString, RefString>(
    RefString(new std::string((*i)->pType->unitType)),
    RefString(new std::string((*i)->unitObjectName)))
);
Monday, August 17, 2009
Monday, May 18, 2009
Mac OS X Leopard and Qt 4.5: hello.cpp
Install Qt Creator on your machine.
Put this code in a file called hello.cpp
#include <QApplication>
#include <QLabel>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    QLabel *label = new QLabel("Hello Qt!");
    label->show();
    return app.exec();
}
Run these commands to generate the project file and the Makefile, respectively:
/usr/bin/qmake-4.5 -project
/usr/bin/qmake-4.5
$ ls
Info.plist Makefile hello.app hello.cpp hello.o hello.pro hello.xcodeproj
Open the directory in the Finder and click on the xcodeproj file.
This will invoke QT Creator and open the project file.
Select
Build->Build All
Build->Run
Wednesday, March 11, 2009
SPE Python IDE installation on Mac OS X 10.5
Use MacPorts to install SPE, but first deal with the MacPorts dependency issues.
sudo port install spe
...
Error: Target org.macports.activate returned: Image
error: /opt/local/include/X11/extensions/render.h is being used by the active render port. Please deactivate this port first, or use the -f flag to force the activation.
Warning: the following items did not execute (for xorg-renderproto): org.macports.activate
Error: The following dependencies failed to build: libsdl xorg-libXrandr xorg-renderproto
Error: Status 1 encountered during processing.
The render port was removed a while ago, and for quite a while before
that it installed no files because it has been replaced by
xorg-renderproto. Anyway, render is no longer needed, so to get rid of
it just run `sudo port -f uninstall render`.
sudo port install spe
Alternatively, download the Mac dmg:
http://developer.berlios.de/forum/forum.php?forum_id=13452
http://blog.jeffstieler.com/wp-content/uploads/2009/03/spe.dmg
Saturday, March 7, 2009
New Dutch 5 Euro coin designed with open source SW
This is worth reading. Open source multimedia software on Ubuntu has arrived.
Sunday, February 22, 2009
Internet Archive ARC files
The Heritrix crawler writes its results into ARC files. Here are some web references with
ARC file reader information.
GOC project.
ARCReaderFactory
A researcher's notes on arc files and how to parse them.
Saturday, February 21, 2009
Using a command file to run gdb
A command file can be used to drive gdb. Why would you want to run gdb from a command file?
Let's say you run a regression of 25,000 test cases and 1,300 tests fail, 200 of them on
asserts. Now you want to know how many unique bugs are causing the 200 failures, so that you
can estimate the amount of work it will take the programming team to fix them. You can write
a post-processing script that finds all the asserts, runs gdb with backtrace on those test
cases, runs a Perl script on the gdb output, and sorts the results to find the unique set of
bugs. The 200 asserts might be caused by 15 bugs in 4 different components; now you know
the scope of the problem.
Below is a command file t.
File: t
run ../../../csl_new_bug/memory_map_invalid/2_regs_mmap.csl
backtrace
We cat the file t and pipe it to gdb, which executes the commands and writes stderr/stdout to the log:
$ cat t | gdb cslc &> log
GNU gdb 6.3.50-20050815 (Apple version gdb-962) (Sat Jul 26 08:14:40 UTC 2008)
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "i386-apple-darwin"...Reading symbols for shared libraries ........ done
(gdb) Starting program: /opt/he_fpl_svn/fpl_repo/cslc/trunk/build/linux/x86_32/cslc/cslc ../../../csl_new_bug/memory_map_
invalid/2_regs_mmap.csl
Reading symbols for shared libraries +++++++....................... done
DEBUG:You are running CSLC without RLM; you'd better be offline...
Fastpath Logic™ License Agreement
Please read and accept the license below to continue.
IMPORTANT - read carefully before DOWNLOADING, ACCESSING or USING the
Chip Specification Language Compiler(cslc) (collectivelly, "SOFTWARE"
of Fastpath Logic, INC.
Your use of the SOFTWARE is expressly conditioned upon and subject to
your agreement to these terms and conditions. If you do not agree to
these terms and conditions, do not DOWNLOAD, ACCESS or USE the SOFTWARE
(refer to license.txt documents for more information).
Assertion failed: (px != 0), function operator->, file /opt/local/include/boost/shared_ptr.hpp, line 315.
Program received signal SIGABRT, Aborted.
0x91c74e42 in __kill ()
(gdb) #0 0x91c74e42 in __kill ()
#1 0x91c74e34 in kill$UNIX2003 ()
#2 0x91ce723a in raise ()
#3 0x91cf3679 in abort ()
#4 0x91ce83db in __assert_rtn ()
#5 0x00bf5bc0 in boost::shared_ptr::operator-> (this=0xbfffb818) at shared_ptr.hpp:315
#6 0x009ea36a in NSCSLOm::CSLOmMemoryMapPage::add (this=0x1cd8230, addrObj=@0xbfffbb30, name=@0xbfffbb28, baseAddress=@0
xbfffbb10) at /opt/he_fpl_svn/fpl_repo/cslc/trunk/src/cslom/CSLOM_MemoryMap.cpp:487
#7 0x009b3d31 in NSCSLOm::CSLOmCmdAdd::execute (lineNumber=30, fileName=@0xbfffbbe8, parent=@0xbfffbbe0, scope=@0xbfffbb
d8, params=@0xbfffbbd0) at /opt/he_fpl_svn/fpl_repo/cslc/trunk/src/cslom/CSLOM_cmd1.cpp:3123
#8 0x009b6b9a in NSCSLOm::CSLOmCmdAdd::build (lineNumber=30, fileName=@0xbfffc830, parent=@0xbfffc828, scope=@0xbfffc820
, params=@0xbfffc818) at /opt/he_fpl_svn/fpl_repo/cslc/trunk/src/cslom/CSLOM_cmd1.cpp:2913
#9 0x00a82180 in NSCSLOm::CSLOmSetCommand::build (lineNumber=30, fileName=@0xbfffcd14, parent=@0xbfffcd0c, scope=@0xbfff
cd04, keyword=NSCSLOm::TYPE_CMD_ADD, params=@0xbfffccfc) at /opt/he_fpl_svn/fpl_repo/cslc/trunk/src/cslom/CSLOM_cmd.cpp:4
53
#10 0x008d3da8 in CslTreeWalker::command_add (this=0x1ccb7b0, _t=@0xbfffcd90) at csl.walker.g:2516
#11 0x008d8922 in CslTreeWalker::csl_command (this=0x1ccb7b0, _t=@0xbfffcfc0) at CslTreeWalker.cpp:200
Friday, February 6, 2009
Saturday, January 17, 2009
Use std::atexit to kill C++ singletons
Killing singletons is not really explained completely on any one blog that I found.
There is a singleton template listed on another blog (see below) that has a DestroyInstance
function, which is called via std::atexit. This seems to do the trick. One note:
make sure that the singleton is not called in destructor code.
/*!
\file singleton.h
\brief Implementation of the CSingleton template class.
\author Brian van der Beek
Introduction
There are times, when you need to have a class which can be instantiated once only. The Singleton
Design Pattern provides a solution for such a situation.
There are several possible ways to implement a singleton pattern, but it all pretty much comes down
to a class that has a private constructor and a static member function to create and retrieve an
instance of the class. My implementation does not differ much from this scenario, with the exception
that I created a singleton template class.
So, why a template class?
Well I have searched the Internet for an elegant implementation of a singleton class but I did not
really find a solution to my satisfaction. Most classes I found consist of a Singleton base class that
you can use to derive your own singleton class from. The problem with most of these classes is the fact
that you still have to override the GetInstance function to return a pointer to your derived class.
A template base class does not have this limitation as I can return any type of pointer.
How it works
To prevent outside sources from creating (or copying) an instance of our singleton class, we need to
shield the constructor and copy constructor of the singleton class. Further we need to provide a
method to create and retrieve a reference to the singleton object:
static T* Instance() {
if (m_instance == NULL) {
m_instance = new T;
}
ASSERT(m_instance != NULL);
return m_instance;
};
When this method is called for the first time, it creates an instance of the singleton class;
any subsequent call returns a reference to that instance. To get a reference
to the singleton object, all we have to do is call this method as follows:
CMySingleton* mySingleton = CMySingleton::Instance();
That is almost all that there is to it. Next to shielding the constructors, I also shielded
the destructor, so the singleton class cannot be deleted by accident. Just call the DestroyInstance()
method to destroy the singleton object. However, be careful about when you call this method,
because afterwards all your class data will be destroyed, and a subsequent call to the Instance()
method will create a new instance.
So how do you create a class derived from the singleton template class? Again there is nothing to it.
Just include the attached header file and create your object as follows:
class CMySingleton : public CSingleton<CMySingleton> {
    friend class CSingleton<CMySingleton>;
private:
CMySingleton();
~CMySingleton();
...
}
Conclusion
This implementation of the Singleton Pattern makes creating your own singleton class incredibly easy.
But you do have to be careful when to destroy the singleton class instance. If you find this to be a
problem you could consider adding (automatic) reference counting.
This article has no explicit license attached to it but may contain usage terms in the article text
or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here
*/
#ifndef __SINGLETON_H__
#define __SINGLETON_H__
#include <cstddef> // for NULL
//! The CSingleton class is a template class for creating singleton objects.
/*!
When the static Instance() method is called for the first time, the singleton
object is created. Every sequential call returns a reference to this instance.
The class instance can be destroyed by calling the DestroyInstance() method.
*/
template <class T>
class CSingleton {
public:
//! Gets a reference to the instance of the singleton class.
/*!
\return A reference to the instance of the singleton class.
If there is no instance of the class yet, one will be created.
*/
static T* Instance() {
if (m_instance == NULL) {
m_instance = new T;
}
// ASSERT(m_instance != NULL);
return m_instance;
};
//! Destroys the singleton class instance.
/*!
Be aware that all references to the single class instance will be
invalid after this method has been executed!
*/
static void DestroyInstance() {
delete m_instance;
m_instance = NULL;
};
protected:
// shield the constructor and destructor to prevent outside sources
// from creating or destroying a CSingleton instance.
//! Default constructor.
CSingleton(){};
//! Destructor.
virtual ~CSingleton(){};
private:
//! Copy constructor.
CSingleton(const CSingleton& source){};
static T* m_instance; //!< singleton class instance
};
//! static class member initialisation.
template <class T> T* CSingleton<T>::m_instance = NULL;
#endif // ! defined __SINGLETON_H__
//////////////////////////////////////////////////////
class CLiTool : public CSingleton<CLiTool> {
    friend class CSingleton<CLiTool>;
...
}
//////////////////////////////////////////////////////
void killSingletons() {
NSCLi::CLiTool::DestroyInstance(); // call the singleton template functions
NSCLi::CLiCommon::DestroyInstance();
NSCslc::CslcTool::DestroyInstance();
}
...
int main(int argc, char** argv) {
// Kill the singletons on exit
std::atexit(&killSingletons);
...
}
Tuesday, January 6, 2009
Use singletons instead of static member vars and functions
A developer's code used macros, defines, and statics to initialize the constant variables used at the top level of the program.
This worked on Linux. However, since the ISO/ANSI C++ standard does not guarantee the initialization order
of static objects across translation units, there is a good chance that a compiler on a new platform will use a different init order, resulting in
segfaults/problems. This is exactly what happened on the Mac. So I converted his macros/statics/defines to singleton
classes and things seem to be working better on the Mac. I had to change about 3k lines of code to use the singleton refs instead of defines and statics.
Monday, January 5, 2009
nm -C
The porting job to the Mac continues. Murphy's law is in full effect.
It turns out that you need to run nm on the lib files with the -C argument to
demangle the symbols in the .a file and see the namespace that each symbol belongs
to.
I used -C and immediately saw that I had forgotten to prefix the
definitions of the static variables in the CLiTool class with the CLiTool::
scope qualifier. Once I added the qualifier to the names, things got "better".
It has been far too long since I "really" programmed in C++.
0000000000000020 B NSCLi::CLiTool::CURRENT_DIR
0000000000000010 B NSCLi::CLiTool::DIR_DELIMITER
0000000000000008 B NSCLi::CLiTool::INVALID_CHARS
00000000000001ab R NSCLi::CLiTool::CDIR_DELIMITER
0000000000000030 B NSCLi::CLiTool::END_ENV_VAR_NAME
0000000000000028 B NSCLi::CLiTool::BEGIN_ENV_VAR_NAME
0000000000000018 B NSCLi::CLiTool::BACK_DIR
0000000000000020 T NSCLi::CLiTool::CLiTool()
0000000000000000 T NSCLi::CLiTool::CLiTool()
00000000000000c0 b NSCLi::CSL_PP_
0000000000000120 b NSCLi::VER_PP_
0000000000000090 b NSCLi::VER_PLUS
0000000000000130 b NSCLi::VER_PRJ_
0000000000000198 b NSCLi::CDOM_AST_
U std::string::size() const
U std::string::operator[](unsigned long) const
U std::allocator<char>::allocator()
U std::allocator<char>::~allocator()
U std::basic_string<char, std::char_traits<char>,
std::allocator<char> >::basic_string(char const*, std::allocator<char>
const&)
U std::basic_string<char, std::char_traits<char>,
std::allocator<char> >::~basic_string()
U std::ios_base::Init::Init()
U std::ios_base::Init::~Init()
0000000000000040 t std::__verify_grouping(char const*, unsigned long,
std::string const&)
0000000000000000 W unsigned long const& std::min<unsigned long>(unsigned
long const&, unsigned long const&)
0000000000000038 b std::__ioinit
U __cxa_atexit
U __dso_handle
U __gcov_init
U __gcov_merge_add
U __gxx_personality_v0
0000000000000352 t __tcf_10
000000000000038e t __tcf_11
00000000000003ca t __tcf_12
Sunday, January 4, 2009
linker/C++ blogs with great info
http://blog.copton.net/articles/linker/index.html
http://www.airs.com/blog/archives/38
http://parashift.com/c++-faq-lite/
With info on static vars
http://parashift.com/c++-faq-lite/ctors.html
http://www.airs.com/blog/archives/38
http://parashift.com/c++-faq-lite/
With info on static vars
http://parashift.com/c++-faq-lite/ctors.html
Saturday, January 3, 2009
Don't use defines for string constants in C++, part two: linker issues
Well, the saga continues. After replacing a set of constants and enums in one include file
with a C++ class containing the constants and enums, we ran into problem number 2.
The constants from include file 1 are used in include file number 2. This results in linker
errors such as the ones below. Note the $non_lazy_ptr references; this is on a Mac.
compile:
[cc] Starting dependency analysis for 1 files.
[cc] 1 files are up to date.
[cc] 0 files to be recompiled from dependency analysis.
[cc] 1 total files to be compiled.
[cc] Starting link
[cc] ld warning: duplicate dylib /usr/lib/libz.1.dylib
[cc] Undefined symbols:
[cc] "NSCLi::CLiTool::CSL_PRINT_IT_FILENANME_", referenced from:
[cc] __ZN5NSCLi7CLiTool23CSL_PRINT_IT_FILENANME_E$non_lazy_ptr in libSupport_Library.a(cslcCLI.o)
[cc] "NSCLi::CLiTool::INFO_", referenced from:
[cc] __ZN5NSCLi7CLiTool5INFO_E$non_lazy_ptr in libSupport_Library.a(cslcCLI.o)
[cc] "NSCLi::CLiTool::CONFIG_FILE_", referenced from:
[cc] __ZN5NSCLi7CLiTool12CONFIG_FILE_E$non_lazy_ptr in libSupport_Library.a(cslcCLI.o)
[cc] "NSCLi::CLiTool::VER_F_", referenced from:
[cc] __ZN5NSCLi7CLiTool6VER_F_E$non_lazy_ptr in libSupport_Library.a(cslcCLI.o)
[cc] "NSCLi::CLiTool::VER_V_", referenced from:
[cc] __ZN5NSCLi7CLiTool6VER_V_E$non_lazy_ptr in libSupport_Library.a(cslcCLI.o)
[cc] "NSCLi::CLiTool::VER_Y_", referenced from:
We need to add the new .o or .a files to the linker command in the Ant build file.
Don't use defines for string constants in C++
I wrote this one up because I could not find anything about this on the web. Someone wrote some code to
delete temporary files. But the files were not being deleted. And then we ported the code to another platform.
Well, this problem turned out to be a destructor chain problem. It was not a memory leak per se or a logic
bug. Rather, this was due to ambiguity in the ISO C++ standard, according to a C++ expert who has worked
on C++ debuggers.
The ISO C++ standard does not fully specify the order in which static objects are destroyed at program exit.
We had a bug which showed on Mac OS X and not on CentOS Linux.
A destructor chain was calling a function in an object that used a global string variable.
The string variable had already been destroyed by the time the destructor chain executed. On
the Mac the STL threw a length_error exception (see the backtrace below). Valgrind did not find the problem on Linux. Leaks
did not find the problem on the Mac. We replaced the defines with a C++ class containing
static strings (a singleton class) and the exception went away.
The way we found the problem was by running gdb and noticing that the destructor chain
contained a call to a non-destructor function. Then, through a process of elimination, we found
the offending variable. The string had the correct value on Linux and an incorrect value on the Mac.
We then put a local variable declaration and assignment in the function called by the destructor and
the bug went away.
(gdb) backtrace
#0 0x938e6b9e in __kill ()
#1 0x938e6b91 in kill$UNIX2003 ()
#2 0x9395dec2 in raise ()
#3 0x9396d47f in abort ()
#4 0x96e5e005 in __gnu_cxx::__verbose_terminate_handler ()
#5 0x96e5c10c in __gxx_personality_v0 ()
#6 0x96e5c14b in std::terminate ()
#7 0x96e5c261 in __cxa_throw ()
#8 0x96e1ccaa in std::__throw_length_error ()
#9 0x96e43f5a in std::string::_Rep::_S_create ()
#10 0x96e4429a in std::string::_Rep::_M_clone ()
#11 0x96e4540e in std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string ()
#12 0x00c3cfbf in NSCLi::CLiCommon::getRoot (path=@0xbfffee2c) at cslcCLI_Support.cpp:363 <<<<<<<<<
#13 0x00c3d7ac in NSCLi::CLiCommon::deleteFile (fileName=@0x1cbc610) at
/opt/he_fpl_svn/fpl_repo/cslc/trunk/src/support/cli/cslcCLI_Support.cpp:227
#14 0x00002c11 in NSCslc::CSLcMain::deleteTempFile (this=0x1cbab00, fileName=@0x1cbab80) at new_cslc.cpp:199
#15 0x00002d8f in NSCslc::CSLcMain::~CSLcMain (this=0x1cbab00) at
/opt/he_fpl_svn/fpl_repo/cslc/trunk/src/cslc/new_cslc.cpp:49
#16 0x00021171 in boost::checked_delete (x=0x1cbab00) at checked_delete.hpp:34
#17 0x000265ea in boost::detail::sp_counted_impl_p::dispose (this=0x1cbb960) at
detail/sp_counted_impl.hpp:79
#18 0x0001a6e0 in boost::detail::sp_counted_base::release (this=0x1cbb960) at detail/sp_counted_base_gcc_x86.hpp:145
#19 0x0001a71c in boost::detail::shared_count::~shared_count (this=0x10578ac) at detail/shared_count.hpp:205
#20 0x0001bd18 in boost::shared_ptr::~shared_ptr (this=0x10578a8) at shared_ptr.hpp:131
#21 0x0001914d in __tcf_223 () at new_cslc.cpp:34
#22 0x938a0fdc in __cxa_finalize ()
#23 0x938a0ed0 in exit ()
#24 0x00001ec7 in start ()
(gdb)
After determining what the problem was, we changed the defines into a tool class. This required moving the defines into the class. We then prefixed the variables, where they were used, with the tool class scope qualifier.
Creating typedefs for C++ function pointers
Use typedefs to create function pointer type names
// *********************************************************************
// function pointers
// *********************************************************************
// the type name is THandler
typedef CLiTool::ECLiError (CLiArgumentList::*THandler)(const RefCLiArgumentBase&);
^^^^^^^
The type is THandler
Use it like this to avoid a complex function pointer as a param.
// *********************************************************************
// CLiArgumentEmpty class
// *********************************************************************
CLiArgumentEmpty::CLiArgumentEmpty(const RefCLiArgumentList& parent,
const RefTVec_RefString& keyWordList,
const THandler& handler,
TBool multiple)
The compiler can return unhelpful error messages for syntax errors in typedefs containing function pointers:
[cc] cslcCLI_Typedef.h:321: error: ISO C++ forbids declaration of ‘ECLiError’ with no type
[cc] cslcCLI_Typedef.h:321: error: typedef ‘NSCLi::ECLiError’ is initialized (use __typeof__ instead)
[cc] cslcCLI_Typedef.h:321: error: expected unqualified-id before ‘*’ token
[cc] cslcCLI_Typedef.h:321: error: ‘THandler’ was not declared in this scope
[cc] cslcCLI_Typedef.h:321: error: expected ‘,’ or ‘;’ before ‘(’ token
The above was caused by a missing scope qualifier on the return type in the typedef:
typedef ECLiError (CLiArgumentList::*THandler)(const RefCLiArgumentBase&);
instead of
typedef CLiTool::ECLiError (CLiArgumentList::*THandler)(const RefCLiArgumentBase&);
Tuesday, December 23, 2008
Installing JDEE on Mac OS X
Frequently, Mac users who want to make use of the JDEE Java Development Environment for Emacs run into a small problem.
Go to this site for instructions on how to tell JDEE where Java is located.
Thursday, December 18, 2008
Friday, December 12, 2008
Nomachine remote desktop
removing hidden svn directories
Let's say you want to move a bunch of files out of one svn repo to another repo.
You need to remove the .svn directories from the directory hierarchy.
From a thread on another site:
Let's say a '.svn' directory is under a directory containing a space. For example:
# mkdir -p "a b/.svn"
Running the find | xargs rm command without null separation will try to delete two entities called "./a" and "b/.svn", and as you pass -f to rm, this will silently fail. To fix this use:
# find ./ -name ".svn" -print0 | xargs -0 rm -Rf
This instructs find to separate the output using a null byte (which will never occur in a filename) and also instructs xargs to expect the input in that format.
Thanks to the other site which got lost in the browser shuffle.
Monday, December 8, 2008
Installing Java 1.6 on a 32-bit Mac OS X Leopard machine
This kind person detailed the process for installing Java 1.6 on a 32-bit Mac OS X Leopard machine
installing-the-jdk-16-on-mac-os-x
Landon Fuller's OpenBSD Soy Latte page.
Friday, December 5, 2008
Creating a new ESL language compiler and verifying it
Fastpath Logic, Inc., over a three year period, built a world-class ESL compiler. The input language is a C++ syntax that is used to define a chip design and verification infrastructure. The language went through several iterations, including a transition from a TCL-like syntax to the final C++ syntax. The language is used to describe the non-algorithmic components in a design and in the verification components, and it ties together the chip design and verification components. The language is split into different categories for the specification of the components that are generated in the reference model, test benches, and RTL design.
Software Pipeline
The development of a compiler requires an architecture that is effectively a software pipeline. The first stages are the lexer, parser, and tree walker (LPW). The next stage is an object model which is populated by traversing the abstract syntax tree and calling object model constructors and build methods. In addition a scope tree is used in the LPW to track the current scope. Scoping is also tracked in the object model. Languages which have classes have functions which can be called to augment the information added to the object by the arguments to the constructor. The object model is modified by a synthesis engine which converts the high level C++ description into the detailed design and verification infrastructure objects. These objects are then traversed by several visitors to generate the reference model, testbench, and RTL infrastructure. We will now describe the methods used to test the compiler.
Compiler Phases and Debug Messages
The compiler is divided into phases. When the compiler is compiled with the -Ddebug flag set, each phase in the compiler has a begin and end print statement that is printed out, along with any error or warning messages that occur during that compilation phase. The result is a log file that contains a set of begin/end messages and any errors or warnings in the individual phases.
Syntax for reporting compiler phase warnings and errors.
BEGIN phase_name
any warnings or errors that occur in this phase
END phase_name
Generating Tests Automatically
The object model needs to check that the arguments to the class functions are correct. This requires a detailed specification that lays out which objects can be passed to other objects based on the inheritance hierarchy. For example, if an identifier class inherits from an expression class, then an identifier may be used in most places where an expression is accepted by other class functions. However, a boolean expression may not be allowed as an argument for some functions that take expression arguments. Tables must be built that describe the allowed arguments for each function. The test team can then construct automatic test generators using a language such as Perl. The tests are marked as valid or invalid in the test name (e.g. signal_bitrange_invalid.csl). The test generators are broken down along language syntax categories. Each test generator creates a set of tests in a directory. Running all test generators creates a directory hierarchy of tests broken down by language syntax category.
Regressing the Compiler
A script is written to compile each test and save the test and test results in a directory. The regression script creates a directory hierarchy based on the input directory hierarchy.
Hierarchical Regression Report
The regression script creates a hierarchical HTML report with the test results broken down by category, showing the status of each of the compiler phases. Each category's results are shown on a row, with each cell in the row showing the number of tests that passed the corresponding compiler phase.
Checking for memory leaks and major errors
The regression script has an option to run valgrind on each test to check for memory leaks. If valgrind is enabled, the results of running valgrind are shown on each category row. The regression script inspects each test's log for major errors such as asserts and segmentation faults. A count of major errors is shown on each category row. The total number of tests passing and failing is shown in the category row, along with the percentage of passing tests. The test generators generate directories of tests that contain either all valid or all invalid tests. If the category test directory is valid then all tests are expected to pass, and all cells in the row should be green if the category passed. If the category test directory is invalid then all tests are expected to fail, and all cells in the row should be red if the category passed.
Color coded output
At the bottom of the top level report the results of each column are displayed. We will now describe the color coding scheme used for valid tests. Invalid tests use the opposite color listed below. Cells are color coded so that the viewer can quickly see if all tests have passed and where the trouble spots are located. If 100% of the valid tests in the category pass, the cell has a green background. If the phase has warnings and no errors the cell has a yellow background. If the phase has 1 or more errors the cell background is red. In addition the result of compiling the different output languages for the category is shown. The automatically generated C++ reference model is compiled with GCC.
The report below shows the results on a local developer's machine. They broke the local build and the golden regressions are failing. They have to correct the bug(s) prior to checking in the code. The cells should all be green.
[screenshot: top-level regression report]
The summary of the regression is shown at the bottom of the regression report.
[screenshot: regression summary]
Drilling down into the categories
Clicking on a cell in a row in the top level HTML report brings up a new page with the tests from the category. Note that there may be more than one page per category listing the tests. In some instances hundreds of pages of category test results are generated, which is why the top level summary for each category is important. Each individual test is listed on a row. The compiler phase status for the test is shown in each cell in the row, organized the same way as the overall summary row described above.
Clicking on a test name brings up a new window with the test source code shown. Clicking on a test category compiler phase cell brings up the test log file as a result of compiling the test. Each compiler phase begin/end and any warnings and/or errors are shown in the log file. The cells are color coded to show the pass/fail status of the compiler phase.
Each category has a list of tests and the results of the phase is color coded.
[screenshot: per-category test results]
The summary of the categories tests is shown below.
[screenshot: category summary]
We used the Perl HTML template library to create the hierarchical web pages.
Check in regressions
Developers are required to run a golden regression which contains tests which must pass prior to check in.
Hourly regressions
Hourly regressions are run using luntbuild on a server to verify that the build is not broken. First the latest source code revision is checked out from the repository. Then the compiler is built. If the build succeeds then the hourly golden regression is run. The summary results from the hourly regression are emailed to the team. The HTML regression report for each regression is available on the build server. Users can view the results via a web browser.
Nightly regressions
Due to computing resource limitations a longer set of tests is run only once a day.
A shorter list of tests are run using valgrind.
Code coverage tools are also run.
Weekly regressions
Due to computing resource limitations a significantly longer set of tests is run only once a week.
A longer list of tests are run using valgrind.
Code coverage tools are also run.
Summary
The HTML reporting mechanism shows managers, developers, and testers a summary of the compiler regression. Regressions may be tracked over time to see the trend in compiler development. The results can be graphed to see if the project is making forward progress.
Software Pipeline
The development of a compiler requires an architecture that is effectively a software pipeline. The first stages are the lexer, parser, and tree walker (LPW). The next stage is an object model which is populated by traversing the abstract syntax tree and calling object model constructors and build methods. In addition a scope tree is used in the LPW to track the current scope. Scoping is also tracked in the object model. Languages which have classes have functions which can be called to augment the information added to the object by the arguments to the constructor. The object model is modified by a synthesis engine which converts the high level C++ description into the detailed design and verification infrastructure objects. These objects are then traversed by several visitors to generate the reference model, testbench, and RTL infrastructure. We will now describe method used to test the compiler.
Compiler Phases and Debug Messages
The compiler is divided into phases. When the compiler is compiled with the -Ddebug flag set, each phase in the compiler has a begin and end print statement that is printed out, along with any error or warning messages that occur during that compilation phase. The result is a log file that contains a set of begin/end messages and any errors or warnings in the individual phases.
Syntax for reporting compiler phase warnings and errors.
BEGIN phase_name
any warnings or errors that occur in this phase
END phase_name
Generating Tests Automatically
The object model needs to check that that arguments to the class functions are correct. This requires a detailed specification that lays out the objects that can be passed in to other objects based on the inheritance hierarchy. For example if an identifier class is inherited from an expression class in the inheritance hierarchy then the identifier may be used in most cases that an expression can be used in other class functions that take an expression argument. However, a boolean expression may not be allowed as an argument for some functions that take expression arguments. Tables must be built that describe the allowed arguments for each function. The test team can then construct automatic test generators using a language such as Perl. The tests are marked as valid or invalid in the test name (e.g. signal_bitrange_invalid.csl). The test generators are broken down along language syntax categories. Each test generator creates a set of tests in a directory. Running all test generators creates a directory hierarchy of tests broken down by language syntax category.
Regressing the Compiler
A script is written to compile each test and save the test and test results in a directory. The regression script creates a directory hierarchy based on the input directory hierarchy.
Hierarchical Regression Report
The regression script creates a hierarchical HTML report with the test results broken down along category showing the status of each of the compiler phases. Each category's results are shown on a row with the result of each compiler phase shown in the cells in the row showing the number of tests that passed that phase.
Checking for memory leaks and major errors
The regression script has an option to run valgrind on the test to check for memory leaks. If valgrind is a regression script option then the results of running valgrind is shown on each category row. The regression script inspects each test's log for major errors such as asserts and segmentation faults. A count of major error is shown on each category row. The total number of tests passing and failing are shown in the category row. The percentage of passing tests is shown in each row. The test generators generate directories of tests that contain either all valid or invalid tests. If the category test directory is valid then all tests are expected to pass. If all cells in the row should be green if the category passed. If the category test directory is invalid then all tests are expected to fail and all cells in the row should be red if the category passed.
Color coded output
At the bottom of the top level report the results of each column are displayed. We will now describe the color coding scheme used for valid tests. Invalid tests use the opposite color listed below. Cells are color coded so that the viewer can quickly see if all tests have passed and where the trouble spots are located. If 100% of the valid tests in the category pass, the cell has a green background. If the phase has warnings and no errors the cell has a yellow background. If the phase has 1 or more errors the cell background is red. In addition the result of compiling the different output languages for the category is shown. The automatically generated C++ reference model is compiled with GCC.
The report below shows the results on a local developer's machine. The developer broke the local build, so the golden regressions are failing; the bug(s) must be corrected before the code is checked in. In a clean state the cells should all be green.

The summary of the regression is shown at the bottom of the regression report.

Drilling down into the categories
Clicking on a cell in a row of the top-level HTML report brings up a new page with the tests from that category. Note that there may be more than one page of tests per category; in some instances hundreds of pages of category test results are generated, which is why the top-level summary for each category is important. Each individual test is listed on a row, and the compiler phase status for the test is shown in each cell of the row, organized the same way as the overall summary row described above.
Clicking on a test name brings up a new window showing the test's source code. Clicking on a compiler phase cell for a test brings up the log file produced by compiling that test. Each compiler phase's begin/end markers and any warnings and/or errors are shown in the log file. The cells are color coded to show the pass/fail status of the compiler phase.
Each category has a list of tests, and the result of each phase is color coded.

The summary of the category's tests is shown below.

We used the Perl HTML template library to create the hierarchical web pages.
Check in regressions
Developers are required to run a golden regression, which contains tests that must pass, prior to check-in.
Hourly regressions
Hourly regressions are run using luntbuild on a server to verify that the build is not broken. First the latest source code revision is checked out from the repository. Then the compiler is built. If the build succeeds then the hourly golden regression is run. The summary results from the hourly regression are emailed to the team. The HTML regression report for each regression is available on the build server. Users can view the results via a web browser.
Nightly regressions
Due to computing resource limitations a longer set of tests is run only once a day.
A shorter list of tests is run under valgrind.
Code coverage tools are also run.
Weekly regressions
Due to computing resource limitations a significantly longer set of tests is run only once a week.
A longer list of tests is run under valgrind.
Code coverage tools are also run.
Summary
The HTML reporting mechanism shows managers, developers, and testers a summary of the compiler regression. Regressions may be tracked over time to see the trend in compiler development, and the results can be graphed to see whether the project is making forward progress.
Thursday, December 4, 2008
Ruby + Rails + Hpricot on Mac OS X
How to install Ruby + Rails + Hpricot from source on Mac OS X Leopard (10.5)
If you want to use MySQL go to this page and install it per these
instructions.
Go to this site and follow the directions in this post to install Ruby and rails.
Check your Ruby configuration:
ruby -rrbconfig -e "p Config::CONFIG"
Install Hpricot
sudo gem install hpricot
Try out irb (ruby command shell)
Check that the correct ruby environment is being used
$ gem env gemdir
/usr/local/lib/ruby/gems/1.8
Check that the correct ruby executable is being used
derek-pappas-computer:~ dpappas$ which ruby
/usr/local/bin/ruby
$ ruby -v
ruby 1.8.7 (2008-08-11 patchlevel 72) [i686-darwin9.5.0]
Check the installed gems
gem list
Check that the hpricot bundle is in the Mac OS X gems dir
find `gem env gemdir`/gems/hpricot-* -name \*.bundle -ls
Install hpricot into the lib the easy way
sudo gem install hpricot
Check that hpricot gem is in the load path
ruby -rubygems -e 'require "hpricot"'