Danie Roux

People person, change agent. Journeyer through problem and solution space. Interested in being interested.


strace, I love you

February 10th, 2007 · general

Quick post:


strace -e trace=network

to quickly figure out where a process is connecting.

This traces only the system calls that the process makes regarding networking. Very useful for making sure that your process believes the same things about servers and ports that you do.

(Turns out the process needed to be told that 443 is not 8443)
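To show what you get back, here is a sketch with a hand-written sample trace (the two lines in the here-document are illustrative of strace's output format, not captured from a real run):

```shell
# Illustrative excerpt of what "strace -e trace=network" prints for a process
# connecting to the wrong port (sample lines, not captured output):
cat > /tmp/net.trace <<'EOF'
socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
connect(3, {sa_family=AF_INET, sin_port=htons(8443), sin_addr=inet_addr("10.0.0.5")}, 16) = 0
EOF
# The connect() lines show exactly which host and port the process believes in:
grep '^connect' /tmp/net.trace
```

To watch an already-running process, `strace -p <pid> -e trace=network` works too.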


Recovering your deleted photos from your digital camera

February 4th, 2007 · general

PhotoRec can recover your deleted photos from your digital camera or memory card.

All memory cards1 use the FAT32 file system from Microsoft. This filesystem does not physically delete files, but just marks them as deleted.

I needed a tool to quickly recover or undelete all those photos. An “apt-cache search dos recovery” in Ubuntu yielded this beauty: testdisk.

It contains a utility called PhotoRec which scans any device for the file types that you specify.

TestDisk also performs some more serious recovery functions, so bookmark it for a rainy day.

1 As far as I know?


When suggestions go wrong

January 16th, 2007 · general

I have been slowly adding my books to LibraryThing. I quite like it and I’m hoping that it will help me discover some new books and authors. Based on what I own, LibraryThing makes suggestions of other books I might like.

But LibraryThing has another, unusual, feature which they call UnSuggester:

“Unsuggester takes ‘people who like this also like that’ and turns it on its head. It analyzes the nine million books LibraryThing members have recorded as owned or read, and comes back with books least likely to share a library with the book you suggest.”

This does not work so well. My current “unsuggestions” include this one: Wuthering Heights.

Now, Wuthering Heights is one of my favourite novels of all time. I own it, and I own most of the Brontë sisters’ books. But I also have ANSI Common Lisp on my Amazon Wishlist.

With suggestions you can easily find the cluster of books someone is interested in and extrapolate to related ones.

But in this case, there are two clusters (books on functional programming and classic English literature) and I own books in both. You cannot assume that people belong to only one cluster at a time. The opposite of Lisp1 is not Wuthering Heights.

All said, UnSuggester is an entertaining feature. The unsuggestions for both ANSI Common Lisp and Wuthering Heights reminded me of a few books I still need to read.

1 It might be Java ;-)


Simple demonstration of Haskell FFI

January 1st, 2007 · general

Update: Incorporated changes by Adam Turoff.

I played with Haskell FFI today. It was mostly straightforward, but to save someone some time I present you with the simplest example of calling into a C library you wrote. You can download the whole working tarball from here.

Most languages provide some FFI bridge that allows you to call at least C libraries. Some languages, like Ruby and Python1, have very elaborate hooks into C, because they themselves are written in C.

Haskell is written in Haskell (strangely enough) and does not require that your C library know anything about Haskell. You write the library and expose it as one. In Haskell you can then do a “foreign import” and have that function available for use in Haskell code. The definitive reference for all of this is: The Haskell 98 Foreign Function Interface 1.0.

We will be calling this C function:

    void buzz(int array_length, int array[]);

The Haskell code that imports this function is the following:

    foreign import ccall "buzzlib.h buzz" my_buzz :: Int -> Ptr (Int) -> IO ()

The only extension that needed to be made to the Haskell standard was the introduction of the word “foreign” in the above statement. Broken down, this import statement does the following:

  • buzzlib.h is where the function is declared and buzz is what the function is called.
  • We then import it into our world as my_buzz
  • “Int” is the first argument (array_length) and “Ptr (Int)” refers to a pointer to an array of integers.

        let second_argument_to_buzz = [ 1, 2, 3, 4, 5 ] :: [Int]
            first_argument_to_buzz = length second_argument_to_buzz
        ptr <- newArray second_argument_to_buzz

The first two lines declare the values that we will be passing to the external function. Then we marshal the list “second_argument_to_buzz” into something we can pass to the C function.

newArray allocates enough memory for the values, then “pokes” the contents of the “second_argument_to_buzz” list into that area of memory and returns a pointer.

We are now ready to finally call our function:

        my_buzz first_argument_to_buzz ptr
        free ptr

We can pass the first value as is, because it is a primitive value. The second value had to be marshalled first, as in the previous snippet; we then send the resulting pointer to the external library.

We then have to free that block of memory once we are done with it.


Adam Turoff simplified this a lot; unfortunately I lost the comments to this article. The simplified version of the call is:

       withArray second_argument_to_buzz_function (my_buzz first_argument_to_buzz_function)

The only remaining question is how to actually compile all of this. For that, have a look at the Makefile in the archive.

1 I prefer Ruby’s FFI over Python’s. Granted, I last did a C library for Python in the 1.6 days.


Getting all your mail out of Gmail

December 22nd, 2006 · general

Google Apps launched a few weeks ago. This week I wanted to transfer my email from the old email account (the one that ends with @gmail.com) to the new one. I thought it would be as easy as POP’ing all the email from the old account with mutt and then bouncing it straight onto the new email address.

That worked. But only if I wanted to transfer only a third of the email. Or more correctly, all the email after 17 December 2005. I enabled POP in Gmail and got this irritating message:

“POP is enabled for all mail that has arrived since 12/17/05”

Now, I’ve had this Gmail account since April 2005 and there is a lot of email there that I do not want to lose. But Google only gives me access to one third of that! I submitted a request via the Gmail Help Center and all I got was a generic message stating that “my request will be forwarded to the teams”. Not helpful.

I have to stress this again: Gmail does not give you POP access to all your mail. Beware.

So I went in search of alternative solutions to get to all my mail. I found libgmail, a Python library that “provide access to Google’s Gmail web-mail service”. It also ships with “one demonstration utility to archive messages from an Gmail account into mbox files”.

I needed to hack archive.py1, the demonstration utility, a bit to make it more robust to network failures. I also added the ability to resume2 after a total breakdown.
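The actual changes were in Python, but the robustness idea is simple enough to sketch as a generic shell retry wrapper (retry and flaky_fetch are hypothetical helpers written for this sketch, not libgmail code):

```shell
# Generic retry-on-failure wrapper, in the spirit of the archive.py hack
# (hypothetical helper, not libgmail code):
retry() {
    attempts=$1; shift
    n=1
    until "$@"; do
        [ "$n" -ge "$attempts" ] && return 1
        echo "attempt $n failed, retrying..." >&2
        n=$((n + 1))
    done
}

# A stand-in for a flaky network fetch: fails twice, then succeeds.
rm -f /tmp/retry_count
flaky_fetch() {
    c=$(cat /tmp/retry_count 2>/dev/null || echo 0)
    c=$((c + 1))
    echo "$c" > /tmp/retry_count
    [ "$c" -ge 3 ]
}

retry 5 flaky_fetch && echo "fetched after $(cat /tmp/retry_count) attempts"
```

Resuming is the same idea one level up: record how far you got, and start from there instead of from scratch.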

After all that, I am now in possession of all of my own mail.

So why does Google make it so difficult? I thought my data would be my data, accessible whenever I wanted it. I will have to be a lot more on my guard about Google from now on.

1 archive.py is part of the libgmail-docs package.

2 Necessary feature for the unreliable bandwidth I have access to.


South Africa can finally create Paypal accounts again

December 17th, 2006 · general

UPDATE (25 March 2010): South Africa can now receive money as well. The Daily Maverick (which you should be reading anyway) has the whole story.

We South Africans are now again allowed to create Paypal accounts. Go ahead, try it. You will be up and ready to pay/receive money in under 10 minutes.

Seven months ago was the last time I tried to create a Paypal account. I was greeted with the familiar message “because of banking laws in South Africa you can not create an account”. Or some such legal reason.

The reason I wanted to do that then was to sponsor Vim. Tonight I was going to prepare for the final step of Moneybookers registration so that I could finally tick off the goal I set out to achieve a year and a half ago.

But realising that I am still too lazy to find a scanner, and that I am not really sure it will be the end of the epic Moneybookers registration journey, I tried Paypal again for luck. Lo and behold, it worked! Within 10 minutes I was registered and had made my donation to Vim.

Compare this to the pain of Moneybookers. Seven months ago Vim made it possible to donate via Moneybookers. I was very excited to see this. I created the account and hoped to be using my credit card within days (at the utmost) to start throwing money at people. I was wrong:

Moneybookers first deducts some random amount under €2. You have to then check on your credit card statement to find that number, so that you can verify it with Moneybookers. Since my online statement only shows the amount in my own currency, I had to wait a full 4 weeks to get my paper statement. I then typed in the magic number, hoping to really now start laying out the cash – but then Moneybookers insisted that I now send them a copy of the paper statement. Preferably via fax. Scanned if I really have no other choice.

So I lost interest. Faxing via a long distance call does not appeal to me and I don’t have easy access to a scanner.

With Paypal, they only need you to go through the verification pain if you want to receive or spend more than $100. Huge positive difference from Moneybookers, since I don’t intend to spend more than $100 per payment anytime soon.

The verification pain from Paypal also consists of a random amount subtracted from your account. The difference here is that they put that amount in the description of the transaction, so I can read it off as soon as it appears on my online statement.

Lesson to be learned: People are lazy, make it easy for them to throw money at you.

Thank you Paypal for finally being in South Africa. I will make use of your services often but in small amounts.


Distributed version control: The perfect fit for the bazaar

September 15th, 2005 · general

The problem: Client-Server needs a gatekeeper

CVS has been the preferred version control system for Open Source projects for a very long time now. CVS in its current form was created in 1989! It has a number of limitations, most notably in renaming files and directories (you can’t). So some core developers of CVS came together and created Subversion to fix the issues in CVS.

Subversion is excellent, I use it daily at work where we manage a lot of projects with a lot of code. It has a number of cool features that fit in well with the requirements of my work environment. Restricting access to certain repositories and even certain parts of a repository is easy, for example.

It is great to have all the code in a centralized repository, centrally managed with the gatekeeper of the cathedral handing out access to those that are worthy and blessed.

But wait, Open Source needs no such gatekeeper. The code is open for all to see, modify, redistribute and even badly mangle. As esr says in The Cathedral and the Bazaar:

“Treating your users as co-developers is your least-hassle route to rapid code improvement and effective debugging.”

So why are we using a version control system that insists on handing out write access to the repository? Sure, you need quality control. Meritocracy is a good model.

But what if someone wants to take the code in a direction that does not immediately seem like a good idea? Or just a direction no lead developer is interested in?

Well, that someone is more than welcome to do so. The problem is that this causes a fork: two code bases, two groups. If they get merged back in the future, it is difficult to retain the code history of each project.

The solution

The solution is to allow the developers to make their own branches of the code, without anyone’s permission. Distributed Version Control Systems make this possible.

If code is stored in a DVCS, anyone can decide to branch and start working on that branch. The big advantage is that early adopters can choose to start using this new branch and provide the rapid feedback that could shape this branch into something that could be folded back into the main tree.

The flipside of the coin is: no permission is needed to fold code from a branch back into the main branch (politeness aside).

So what we have then is a complete bazaar of code, where code and ideas flow from one group to another and back again; where one main group becomes two smaller groups that dynamically merge back into one at some later stage. With full history of all the events and all the changes.
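Sketched with git (one of the current DVCS crop; darcs and others work similarly), the no-gatekeeper workflow looks like this — mainline, fork and the file names are placeholders for the sketch:

```shell
# The bazaar workflow with a DVCS (git here; any DVCS works similarly):
rm -rf /tmp/bazaar && mkdir -p /tmp/bazaar/mainline && cd /tmp/bazaar/mainline
git init -q
git config user.email "dev@example.org" && git config user.name "Mainline Dev"
echo "version 1" > code.txt
git add code.txt && git commit -qm "initial import"

# Anyone may branch: cloning needs no write access to the main repository.
cd /tmp/bazaar && git clone -q mainline fork && cd fork
git config user.email "fork@example.org" && git config user.name "Fork Dev"
echo "wild new direction" >> code.txt
git commit -qam "experiment"

# Later the mainline can pull the experiment back, full history intact:
cd /tmp/bazaar/mainline
git pull -q ../fork HEAD
git log --oneline
```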

DVCS in action

Darcs-git started me thinking about this. It started off as an unsanctioned, unblessed fork/branch of darcs. It became stable enough to be merged back into darcs-unstable. This is the bazaar working in all its glory.

Disadvantages of DVCS

Since there is no single location of the code, there is no single main trunk either. The only way to have such a trunk is by convention, naming or otherwise. This can be confusing for new users, but in reality it’s not a problem.

The other problem with DVCS is that it is not widely understood or trusted. That will of course change with time as more and more people turn towards it.

Current DVCS systems

A great overview of current systems is David A. Wheeler’s Comments on Open Source Software / Free Software (OSS/FS) Software Configuration Management (SCM) Systems.


Learned enough Tcl to be dangerous

August 18th, 2005 · general

We got new toys at work. Two BIG-IP load balancers from F5.

Seriously cool devices. Some highlights include:

  • It runs Linux kernel 2.4 in conjunction with some microkernel OS that F5 calls TMOS.
  • Customized, interactive monitors for server availability can be written in Perl.
  • Traffic flow can be modified using Tcl scripts. Basically anything can be inspected and modified, at the TCP level as well as the HTTP level. The example given is sending IE users to their own server1.

The application we deployed behind these load balancers insists on sending back an HTTP 302 redirect that forces the client to continue on plain HTTP. But we terminate SSL on the load balancers, so we really want that redirect to go back to the client as HTTPS.
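The production fix lives in an iRule (Tcl) on the BIG-IP, but the transformation itself is easy to show: rewrite the Location header of the 302 before it reaches the client (the URL below is a placeholder):

```shell
# What the load balancer needs to do to the application's redirect, sketched
# with sed (the real fix was a Tcl iRule on the BIG-IP):
printf 'HTTP/1.1 302 Found\r\nLocation: http://example.com/app/login\r\n\r\n' |
    sed 's|^Location: http://|Location: https://|'
```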

So, to get this done I picked up Tcl again tonight. TclTutor was helpful to quickly grasp the basics.

I then started to write the script. To test it, I started to stub out the library functionality. Before I knew it I was reading up on namespaces in Tcl. I then realized that they are not using plain Tcl on the BIG-IPs. It is in fact [incr Tcl]. So now I can do Object Oriented programming in Tcl. Well, enough of it to attempt to tickle off my foot.

1 Brilliant idea that: “No sir, it must be your browser causing the slow response time, upgrade to a decent browser.”


Using GNU Stow to manage source installs

August 7th, 2005 · general

Installing packages from source used to be the only way to get software onto your Linux box. SLS was the first real Linux distribution, but I believe Slackware was the first to offer package management, with Debian offering the first package management with dependency resolution.

Since then, source installs became a problem. Installing your own version of a package over the version provided by the distribution often leads to something breaking.

Luckily, the Filesystem Hierarchy Standard makes provision for this, by specifying that your own source installs should go into /usr/local.

Most software that uses GNU Automake installs into /usr/local by default. The questions you should be asking are:

  • How can I be sure that the software does not trample over anything already installed by overwriting something?
  • How do I keep multiple versions of software installed?
  • How do I manage (add, safely delete) the software I install into /usr/local?

I discovered GNU Stow years ago. It was written specifically as an answer to all those questions. It comes standard with most distributions; there’s even a package for Fedora1.

I am still surprised that I don’t see many people advocating it. That may be because the manual is big and people are too lazy these days to read through it. Hopefully if you finish reading this you’ll know how to use Stow and understand why you would want to.

In a nutshell: Stow has its own directory, /usr/local/stow. You install all the software into that directory. You then invoke Stow, which makes symlinks into the /usr/local hierarchy. That’s all Stow does. It never copies or removes files. It only deals in symlinks.

So, you can compile and install all your software without using root privileges. Only when invoking Stow do you need to become root. The advantage of this is the answer to the first question: since you are installing the software into /usr/local/stow as a normal user, you can be sure that no files are placed anywhere else, nor are any of your system files changed.
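What Stow does can be demonstrated with plain symlinks in a scratch tree (a simplification: real Stow also folds whole directories and refuses to overwrite conflicting files), using /tmp/local as a stand-in for /usr/local:

```shell
# Simulating "stow fubar" and "stow --delete fubar" with plain symlinks:
mkdir -p /tmp/local/stow/fubar/bin /tmp/local/bin
printf '#!/bin/sh\necho fubar\n' > /tmp/local/stow/fubar/bin/fubar
chmod +x /tmp/local/stow/fubar/bin/fubar

# "stow fubar": link the package's files into the /usr/local hierarchy
ln -sf ../stow/fubar/bin/fubar /tmp/local/bin/fubar
/tmp/local/bin/fubar            # runs via the symlink

# "stow --delete fubar": remove only the link; the package itself stays put
rm /tmp/local/bin/fubar
ls /tmp/local/stow/fubar/bin    # fubar is still installed under stow/
```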

Installing software with Stow:

For example, let’s install a package called fubar:

In the source directory, as a normal user, run:

./configure --prefix=/usr/local
make PREFIX=/usr/local # Only some Makefiles need this2
make install prefix=/usr/local/stow/fubar

Then, become root, usually using the command su.

Now you need to do this:

cd /usr/local/stow
stow fubar

And that’s it. For the rest of the system it will look as if fubar was installed directly into /usr/local. In reality, it was installed into /usr/local/stow/fubar, so you know exactly what was added to your system.

Uninstalling software with Stow:

Well, you can’t. Stow doesn’t ever delete any files. Ever. It only makes and deletes links. You can thus delete all the links to the package, which you do using:

cd /usr/local/stow
stow --delete fubar

The rest of the system won’t see fubar anymore. Deleting /usr/local/stow/fubar will completely remove it from your system.

This leads us to the next great feature of Stow:

Keeping multiple, self-compiled versions of a package around

Since GNU Stow doesn’t remove any files, and since it links anything in the /usr/local/stow hierarchy into /usr/local, you can keep more than one version around.

Let’s say there’s a fubar-1.0 that you installed. Now you want to install the CVS version to test it out3. This is how you would do it:

In the directory where you have the CVS source code, do the same as when you installed fubar. Just change the make install to:

make install prefix=/usr/local/stow/fubar_cvs

Now, unlink the fubar-1.0:

cd /usr/local/stow
stow --delete fubar

Then, Stow the CVS version:

stow fubar_cvs

You can easily switch between these two using only Stow. So if the CVS version is a bit too bleeding edge, it’s easy to fall back to stable.

1 This is a bit of a jab at the poor people that have not yet converted to the greatest distribution of them all.

2 This is a convention found in many Makefiles. The VARIABLE=VALUE on the command line is how you change the value of a variable in that Makefile.

3 These days you hopefully get a Subversion repository instead.


Time for a walk on the Seaside again

July 29th, 2005 · general

With my studies under control (for a certain given value of “control”) I feel like taking a walk on the Seaside again.

So tonight I grabbed the newest Squeak version from the official site. I then followed the instructions on the Seaside site and soon had a running Seaside application again.

One annoying thing though: it seems as if the proxy user and password are ignored by HTTPSocket. So I bypassed my local proxy and ignored the issue for now.

To get developing again, I installed all the packages I consider crucial to productive development in Squeak:

  • Whisker: Whisker with its stacked browsing approach works extremely well. One window with all the code I’m working on open feels much better to me than a lot of open browsers.
  • Shout: Code highlighting for Squeak. Call me spoiled, but boring black text is just not fun anymore. Install ShoutWhisker as well.
  • TestBrowser: I like Eclipse’s integration with JUnit. The standard SUnit Test Runner in Squeak looks almost defunct compared to it. TestBrowser makes running tests fun again.
  • SVI: I kept the best for last: SVI is by far my favourite Squeak development extension. It gives me VI bindings everywhere in Squeak (it does Emacs too, but who cares about that). It also enables tab completion which is incredibly useful. Of course, since Smalltalk is not statically typed it doesn’t get it right all the time. It works well enough to save me lots of time though.

Ok, I need to stress this: SVI gives you VI bindings in Squeak. For someone who believes that bash should by default be “set -o vi”, this is a killer feature.

I am writing a little application, with the potential to grow into something actually useful. Under these controlled conditions, I decided to experiment with Magma. Object databases are just so much more fun to work with, even though Object-Relational mapping is getting a lot better these days.
