Thursday, March 21, 2013

Design by committee works slowly

The latest draft for polymorphic lambda expressions, which I advocated for in a post about 3 thousand years ago, is a step in the right direction. I greatly appreciate the time that the authors are taking to push C++ forward. I know they do it on a volunteer basis and I believe their passion for it makes C++ one of the best languages to use for a variety of projects. On reading the draft though, I'm still a little underwhelmed.

Lambda expressions are anonymous functions that are common in languages with first-class functions like Lisp. Roughly, the language gives you the ability to create functions "at runtime", which allows you to capture data and other state. Once this is possible, anything is possible. You can read more about anonymous functions at Wikipedia.

When a programmer uses anonymous functions, he or she is not doing it for a technical reason (e.g., performance). They are doing it for one or both of the following reasons:

  • Lexical locality: The data that the anonymous function will be operating on is somewhere nearby and we just need to do a little transform on it to make it useful to something else which is also nearby.
  • Readability: x => 2*x+y is much easier to read and understand than MyFunctor f(y) because you need to go look up the definition of MyFunctor.

In x => 2*x+y, you can see that the 'y' value must come from somewhere else in the function: capturing data in lexical scope is an important part of anonymous functions. This is the reason why MyFunctor takes in 'y' as a parameter.

Anyway, as my post tried to explain, one of the main problems is the ridiculous verbosity inherent in monomorphic lambda expressions. By allowing polymorphic lambdas, the verbosity has a chance to be reduced, or even eliminated down to the simplest possible form. The latest draft, however, still makes an "auto" necessary on lambda expression parameters.

To recap, C++11 lambda expressions transform a statement like:

    [](double slope, double intercept, double x){ return slope * x + intercept; }

into a function object not completely unlike:

    struct LOL
    {
        double operator()(double slope, double intercept, double x){ return ... ;}
    };

Most lambda expressions will only ever be used with one set of parameter types and in one situation so it is not hard to understand why this is one acceptable syntax. However, languages like C# have much more concise syntax for the above case:

    (slope, intercept, x) => slope * x + intercept

The compiler figures out the types since it is a statically typed language and everyone is happy.

Before lambda expressions, in C++, we might have written:

    namespace bl = boost::lambda;
    ...    bl::_1*bl::_2 + bl::_3 ...

My goal for C++ lambda expressions would be to never use any of the Boost lambda libraries again, as useful and awesome as they are. With the new draft, the C++11 version becomes:

    [](auto slope, auto intercept, auto x){ return slope*x + intercept; }

As you can see, the above Boost Lambda form is arguably still preferable to the draft version of polymorphic lambdas on length alone. And although the draft version is longer, it is slightly easier to read and understand because of the named parameters. But why can't we spoil ourselves? There aren't too many technical tricks required to automatically turn:

    [](slope,intercept,x){ return slope*x + intercept; }

into the same form behind the scenes.

In my humble opinion, the auto adds nothing to readability and actually takes away from it because I am required to read more to understand what is going on. Multiply this by thousands of expressions and multiple projects and it is just another thing I have to skip over. There is actually very little reason to require auto. Even with this extension, it is still easier to use Boost Lambda.

The 5 people who voted "strongly against" making auto optional should rethink their votes. This is the best chance we have of getting it right the second time.

Sunday, May 29, 2011

Learning about Bitcoin or why I'll never use Bitcoin

Bitcoin is quite a promising e-currency. Created by some-guy-we-don't-really-know-or-a-double-agent-of-some-kind-who-is-probably-quite-Bitcoin-rich-now, it has some very useful properties:

  1. Creation of the money is an implicit and transparent agreement between users. That is, there is no centralized issuing authority and there is a finite quantity. Almost like gold.
  2. It is completely electronic and therefore very cheap to transfer. As a result, transaction fees are "low".
  3. Transactions are anonymized, yet completely public, to guard against double-spending.
  4. Some crypto stuff to make sure it is as secure as it can be today.

The main desired outcome of a currency with these rules is autonomy of the currency from the somewhat arbitrary influence of centralized planners.

How you are supposed to use Bitcoin

So how do you use Bitcoin (BTC) as a consumer or vendor? Let us assume that you already have some BTC in your account.

  1. Visit place of business
  2. Locate item of interest which costs 0.02 BTC
  3. Go to cashier
  4. Pull out smart phone with your Bitcoin wallet or some kind of link to your Bitcoin wallet
  5. Use QR-code at register to find vendor's payment address. This address will likely be generated at the point of purchase
  6. Send Bitcoin to that address from your Bitcoin wallet
  7. The cashier and network verifies your payment (speed depends on transaction fee) and you go on your way

This is how it would work today if a business accepted BTC. I expect that if I am wrong and Bitcoin does indeed take off, there will be clearing houses to speed up transactions like these. I think these confirmations will necessarily be done outside the network but eventually, the network will also validate them, which will be the final settlement step.

This exchange is appealing for various reasons. My favourite one is that the users of the system itself benefit by confirming transactions. That is, you can make Bitcoin just by verifying transactions.

Bitcoin Wallet

It is probably useful to discuss where Bitcoins are stored. This location, a file on your hard disk, is called a wallet. It consists of a set of private keys that correspond to each address generated as in the above scenario. This is your vault. If it is stolen in unencrypted form, your money is probably as good as gone. But the coolest part is that if you have a backup and it was encrypted, you simply transfer the money to an account in a new wallet before the thieves are able to crack the encryption and, almost by magic, your money is back again.

Anonymous vs Anonymized

Earlier, I said that transactions are anonymized. This is different from them being anonymous because an anonymizing technology does not imply anonymity. A transaction being anonymous means it is untraceable, which is quite easy to disprove in the BTC world.

Let's start at the beginning. How do you get BTC? There are a couple of ways. One way involves a lot of geekery and stuff that very few people have time for. This is called Bitcoin mining. For most people, just outright buying BTC like they buy USD is the most convenient. Currency is a proxy for labour so it is fine to buy BTC. As the market will continue to be volatile due to the simultaneous debasing of the USD, demand-side pressure as well as the continuous creation of BTC, I would spread out bigger purchases over a few months.

A convenient way to buy BTC is through an exchange. So let us walk through that process:
  1. Create an account with a BTC exchange. I used Bitcoin Market. This requires you to give them two things: an email address and a Bitcoin payment address. Notice how your email address is tied to your BTC address.
  2. Figure out the trade you want to make. I used BMBTC for PPUSD where BM = Bitcoin Market and PP = PayPal.
  3. Execute the trade by making a payment to some email address on PayPal.
When I executed this process, it took a total of 15 minutes for the trade to complete but it was a full hour before the money was in my actual wallet and verified by the network. You must note that this is the equivalent of someone on the other side of the world paying me $10 and someone delivering that $10 to me personally. Not to a bank account, not a promise for $10, but cold hard cash to me personally.

Notice that the process of conveniently buying BTC itself has multiple weak links:
  1. Your email address is tied to a Bitcoin address by Bitcoin Market
  2. Paypal knows who you are definitively through the use of your credit card
  3. Some random dude knows you bought some BTC
To avoid leaking too much information, you can create a new receiving address for every trade and update it on the Bitcoin Market. Note that Bitcoin Market has full trade information and PayPal has amount information. To reduce the risk there, you can use anonymizing email services or a special email just for Bitcoin purchases.

The main point is that once you use a credit card or a personal email address, your anonymity is compromised.

That's not such a big deal, to be honest. After all, you already trust a lot of people with your information online.

De-anonymizing the transactions

If the seller of the BTC was interested in which address bought the BTC through the exchange, s/he would just track the blocks for the specific amount.

When I purchased my BTC, I chose 2 BTC to see how difficult it would be to find in the block explorer. It was pretty easy! Why? Because I knew there would be three related transactions: one for 2, one for 1.99 and one for 0.01 (the exchange's transaction fee). The seller would know this as well.

So all I did was wait for a few blocks to come through the explorer and opened them all up in a browser tab and searched for 1.99. It took less than a minute.

So now, the seller of the BTC has tied my name (through Paypal) to an address.

You may be interested in the actual transaction as currently being confirmed by computers worldwide. Because of this decentralized confirmation, it is now impossible for the seller to re-sell the same BTC to someone else.

Using my Bitcoin or why I'll never use it

Can you figure out what I did with my BTC? Actually, you have all the information you need in this blog post. Once you figure it out, you'll understand why I'll never use it. The first person to add a comment with the right answer and their Bitcoin receiving address will get the remainder of my balance transferred to their Bitcoin address. It's not much, but I probably won't use it...

How to stay anonymous

There are ways to stay anonymous by obfuscating the block chain. However, this is a workaround, not a fix. For a currency to be useful, its primitive form must be practically anonymous and not just anonymizing.

How I'd change Bitcoin

My main issues with Bitcoin:
  • Not anonymous: Identity "anchors" are very easy to establish by transacting with people as described above. This leads to a situation where an attacker can find out what you spend your BTC on for their own nefarious purposes.
  • The currency has no decay: it can be hoarded without consequence. I would like BTC to expire so that the currency keeps circulating. This maintains the value of the currency but prevents hoarding. The block chain has enough information to do this. Miners should be interested in this because it means they can continue to mine forever and keep the Bitcoin economy healthy.
I think the anonymity problem is the hardest to solve. I am only concerned with the ability to transfer coins between my own accounts easily without notifying anyone else. If some way could be devised to solve these problems, goodbye centralized currencies.


Tuesday, April 26, 2011

Deconstructing a dependency injection-driven application

I've been using my C++ dependency injection library for a project over the last year and it's gone pretty well. There are a lot of rough edges but I thought it could be interesting to the 3 of you still subscribed to this blog to deconstruct the stock quote application.

About the application

The example itself is pretty straightforward. You have a choice of 3 stock quote providers: Yahoo!, static and phone. You choose one and ask for a stock quote. Magic happens and your stock quote arrives.

Example session (with some debug output)

Welcome to the DI Stock Quote App. Simplifying and complicating software development since 2010.
Which stock quote service would you like to use?
1: static
2: phone
3: yahoo
Enter your choice (1-3) and press enter: 3
You chose: yahoo
[DICPP]: No scope constructing: di::type_key<YahooStockQuoteService, void>
[DICPP]: Constructing: di::type_key<di::typed_provider<HttpDownloadService>, void>
[DICPP]: Completed constructing: di::type_key<di::typed_provider<HttpDownloadService>, void> with address: 0x100750
Stock symbol (type quit to quit): goog
[DICPP]: No scope constructing: di::type_key<HttpDownloadService, void>
[DICPP]: Constructing: di::type_key<boost::asio::io_service, void>
[DICPP]: Singleton: constructing: di::type_key<boost::asio::io_service, void>
[DICPP]: Completed constructing: di::type_key<boost::asio::io_service, void> with address: 0x1008a0
Current price for goog: 532.82
Stock symbol (type quit to quit): quit

See how the construction of the HTTP service is automatically delayed until actually needed. This is done through a concept called a "provider" which is basically an automatically generated factory.

About Dependency Injection

A really good introduction to the dependency injection technique as implemented by Guice can be found here. It's probably one of my favourite tech talks of all time.

Anyway, to refresh your memory, here are some of the main benefits of the technique used in Guice:

  • Object construction and lifecycle management is mostly handled for you.
  • Less boilerplate.
  • Makes code more testable.
  • Scopes (~object creation/lifecycle) can be customized by the user.

In short: a lot of the time, you no longer need to allocate objects or pass some object unused down multiple layers of functions or object constructors just to use them once way deep down in some code.

Magic!

I don't really recall how it is done in Guice but in the C++ library linked above, this magic is driven by a type registry which recursively registers constructor arguments as well as user customizations.

In extreme cases, you can initialize an entire application with a few lines of code:

    di::registry r;
    r.add( r.type<MyApplication>() );
    r.construct<shared_ptr<MyApplication>>()->execute();

This constructs the type registry which is a kind of factory. There is a mini-DSL for describing how you want the registry to handle the type. More on this later. In this case, we are asking the registry to "learn" about the MyApplication type as well as all objects that are required for constructing MyApplication.

"Pish-posh", you say. "MyApplication has a 0-arg constructor. I could do that in my sleep."

Would you be surprised if I said that the MyApplication type actually has 3 arguments?

Well, the above is almost what the StockQuote application looks like. Here is the main function for the stock quote example:

    di::injector inj;
    inj.install( StockQuoteAppModule() );
    StockQuoteApp & app = inj.construct<StockQuoteApp&>(); // lifetime
    app.execute();

And here is the constructor for the StockQuoteApp type:

    DI_CONSTRUCTOR( StockQuoteApp,
                    ( boost::shared_ptr<UserInterface> ui,
                      boost::shared_ptr<StockQuoteServiceFactory> factory ) );

When we ask the "injector" to construct the StockQuoteApp instance, it automatically creates the UserInterface as well as the StockQuoteServiceFactory instance.

The di::injector type is just a thin wrapper around the registry so you can treat it as such. The only thing it really provides is a little bit of syntax to allow you to create modules in a similar manner as Guice. The guts of StockQuoteAppModule accept a registry as a parameter and register the various types. You can see the mini-DSL referred to earlier:

    void
    StockQuoteAppModule::operator()( di::registry & r ) const
    {
        // In each module we define the module's root objects, in this case,
        // StockQuoteApp as well as implementations/specializations of any
        // abstract classes. For example, UserInterface is an ABC and we choose
        // the console-based UI here.

        r.add(
            r.type<StockQuoteApp>()
                .in_scope<di::scopes::singleton>() // The reason we can request a reference in the main function!
        );

        r.add(
            r.type<UserInterface>()
                .implementation<ConsoleInterface>()
                .in_scope<di::scopes::singleton>()
        );

        r.add(
            r.type<StockQuoteServiceFactory>()
                .implementation<StaticStockQuoteServiceFactory>()
                .in_scope<di::scopes::singleton>()
        );

        r.add(
            r.type<HttpDownloadService>()
                .implementation<AsioHttpDownloadService>()
        );

        r.add(
            r.type<boost::asio::io_service>()
                .in_scope<di::scopes::singleton>()
        );
    }

As you can see, the mini-DSL (ugly, ugly, ugly details) describes a few things:

  • Default implementations for various interface classes. See UserInterface and ConsoleInterface, for example.
  • Life-cycle management. Singleton is mostly used here but you can also have HTTP-session scopes, thread-local scopes or no scopes (as in HttpDownloadService).

What this means is wherever a type T with a DI_CONSTRUCTOR macro is registered, the registry will use these rules described by the DSL to construct any arguments to T.

Providers
In this library, there is a concept of a type called a provider whose sole responsibility is to construct objects (usually within the constraints of a scope). In the app session above, I pointed out how the HTTP download service is not instantiated until it is actually needed. This is done via a provider. You can see that YahooStockQuoteService has a constructor which accepts a provider and a function which makes use of it.

That should be enough information to peruse the example itself. Check the README as there are a couple of interesting exercises you can try.

By the way, this requires a Boost checkout with a built version of Boost Build. I apologize if you can't get it to build on checkout, but I haven't really focused on having other people use it!

Comments and thoughts welcome.


Tuesday, March 29, 2011

C++ has not jumped the shark

I love John's blog. If you are not subscribed, you should be subscribed. He is one of my favourite bloggers as I actually learn something when he posts.

Somehow, I managed to miss his post on C++ going about as far as it can go in its evolution. Fortunately, it showed up on YCombinator News a few days back so I got the chance to catch up.

From my reading, John is concerned about the following:

  • The language has stopped evolving because it is too long between revisions of the standard.
  • He doesn't need anything new therefore new features are not useful for him.
  • Something about concepts.

I don't intend on refuting or accepting those points as that's not what this is about. I just wanted to give a short summary. Also, I'm not really that interested in concepts but that is no reason to not include them. I'm sure I would also have said "TEMPLATES? WHY DO WE NEED THIS COMPLEXITY?!!!!"

The current C++ standard has been greatly influenced by the various Boost libraries. From lambda to thread, the influence of the Boost development experience is obvious, if not prevalent. Boost made it easy to decide what libraries to include. After all, we've had a few years of practical experience with them.

Reading the list of libraries in first-released order, there are a lot of libraries for various holes in the standard library. The Wikipedia article on the Boost libraries makes it much more clear as to where Boost development has been focused.

There is another trend: the number of libraries dealing with language issues has steadily decreased over time. Now, that is not to say that there will not be another set of C++0B libraries; there probably will be. But I don't know if it will trigger the same kind of innovation.

So if that is all true, is C++ over?

In the last few years, there has been a gigantic evolution in C++-land: Clang. I have been fortunate enough to spend some quality time with Clang in the last little while and I have to say that I have enjoyed it a lot more than the last time I spent some time with another open-source C++ compiler.

With Clang, it is reasonably easy to add new features, even easier to add features that translate into combinations of existing features.

So while it was straightforward to add library changes to C++0B due to Boost, it was a lot harder to do the same for language syntax because there was no real experience with many of the proposed features.

Clang can enable, for the language, what Boost enabled for libraries.

However, I don't think Clang is really at the point from an organizational and technical perspective where it can enable and manage the kind of innovation that Boost was able to oversee.

That being said, I look forward to its role in the future of C++. I think it's a bright one*.

* Someone please make C++0B lambda polymorphic.


Sunday, August 15, 2010

Continuous testing with Emacs

The other day, I came across a very interesting paper on continuous testing during development. In it, the authors found that program correctness for a group of students could be predicted based on whether the students used continuous or manual testing. Continuous testing means running your compilation and project tests as you save files locally, as opposed to continuous integration, which usually runs as you check in code. Manual testing means having the student run the test suite manually. Those students who used continuous testing were a few times more likely to finish the project and made fewer mistakes.

There is nothing in Emacs preventing a developer from implementing this behaviour for their projects so I decided to do that to simplify the edit-test cycle for a particular project.

The key is that after saving a file, using the built-in Emacs hooks, I launch a compile process. The hook only does so if the file being saved is located in the project directory, which it determines by looking for a string in the filename ("tmp" in this case).

Here is a screencast of the behaviour with the code in the left side of the split. In the right split, I am editing a script (/tmp/runtest) which represents the test suite. The "project" is located in /tmp. The video shows me saving the test suite file twice. Once with no errors and the second time with an error. In the first case, the compilation buffer goes away once the test suite has run, which keeps things tidy. In the second case, the compilation buffer stays around because an error occurred in the "test suite".

This specific setup works best with a compilation and test phase which runs relatively fast. To make it work for longer test suites, you'd probably need to modify the test-command code to kill the running compilation first. You'd probably also want to modify the after-save-hook to use some kind of timer after which you start the compile.

There are lots of things which I'd like to work better, but it works OK for me now.

Let me know how it works out for you if you try it out. The code is here.

Enjoy!


Saturday, May 22, 2010

Dependency Injection in C++/Plugin-based C++ applications

I have made two of my coding research projects available on bitbucket. One is an example of writing a C++ application which can dynamically load and execute Python plugins and the other is an investigation into a Google Guice inspired dependency injection library in C++.

See the wikis for some explanation and browse the code. The code is under WTFPL.

If you're wondering, I chose Mercurial because I do not have the brain capacity to understand Git.


Sunday, December 20, 2009

Using Boost Build on your own projects

While I am a fan of SCons, every now and then I like to dabble in other build systems. One that has intrigued me for some time is Boost Build (BB). You can visit the linked site to find out more about it but in a nutshell, it is a very elegant way to build C++ software.

This post will attempt to give you some steps you can use to get started using the tool on your own projects. Note that it is a bit long but if you are new to Boost Jam as I was a few weeks back, I think it might help you get started. Please feel free to ask any clarifying questions in the comments.

In the following, Boost Jam is the build tool and Boost Build is the library on top of the Jam language. I use the terms interchangeably, and I'm sure people will give me hell for it.

Building Boost Jam


Before getting started, I suggest you build Boost Jam as follows (might as well get Boost too!). I assume you are on a Unix system because there really is no reason to use Windows anymore ;-) but you should be able to get the same results on Windows with some slight modifications.

$ wget http://downloads.sourceforge.net/project/boost/boost/1.41.0/boost_1_41_0.tar.bz2
$ tar -xjf boost_1_41_0.tar.bz2
$ pushd boost_1_41_0
$ export BOOST_ROOT=$PWD
$ pushd tools/jam/src/
$ ./build.sh
$ export PATH=$PWD/bin.macosxx86:$PATH # substitute appropriately

Now, when you type "bjam" at the command prompt, you may get the following output:

$ bjam
warning: No toolsets are configured.
warning: Configuring default toolset "gcc".
warning: If the default is wrong, your build may not work correctly.
warning: Use the "toolset=xxxxx" option to override our guess.
warning: For more configuration options, please consult
warning: http://boost.org/boost-build2/doc/html/bbv2/advanced/configuration.html

error: error: no Jamfile in current directory found, and no target references specified.

The Jam Language


One complaint about Boost Build is that we must use the Jam language. However, it's really not so bad. While I would prefer Python, the Jam language is consistent and very simple. The main things to remember (this is my mental model and may not be technically accurate):


  • Rules are the same as functions in other languages

  • Parameters to functions are separated by ":"

  • All tokens are white space separated (use quotes to embed white space)

  • Results of functions can be used by enclosing the function call in a [] pair

  • Comments start with # and go to the end of the line



Here is an extremely simple example of a rule/function (create a file called "Jamroot" in the current directory and put in the following):

rule show-list ( list-of-stuff + : sep ) #1
{
    for local l in $(list-of-stuff) #2
    {
        echo $(l) $(sep) ; #3
    }
    return "Hello, World" ;
}

echo [ show-list 1 2 3 : "|" ] ; #4


  1. This line declares a new rule called "show-list" which accepts two parameters: a list as the first parameter and a single value as the second. Note the "+" modifier on the first parameter. This indicates to the build tool that the list must contain at least one element. You can use "*" to indicate 0 or more. I believe this can also be used to indicate optional parameters.

  2. This line is a for loop using a local variable. Note the variable expansion using the "$()" syntax. In this case, each iteration of the loop will expand to an element of the list in list-of-stuff.

  3. Here we call the echo rule. Note that the line is terminated by the ";" symbol. This is required!

  4. Finally, we call the new rule with a list as the first parameter and a keyword enclosed in quotes as the second parameter. We use the result of that rule and pass it to echo.


If you execute "bjam", the output looks something like:

1 |
2 |
3 |
Hello, World

Pretty boring!

Creating a new project


When Boost Jam is invoked, it looks for a file called "Jamroot" in the current directory or in one of the parents of the current directory. This is where you define project global settings. Let's do that now. Create a new file called Jamroot and include the following contents:

import toolset ;

project app
  : requirements
    <threading>multi
    <link>static
    <warnings>all
    <warnings-as-errors>on
    # Equivalent to: <toolset>darwin:<architecture>x86 <toolset>darwin:<address-model>32
    [ conditional <toolset>darwin : <architecture>x86 <address-model>32 ]

  : default-build debug release
  : build-dir build
;

The requirements state that all artifacts should be built using multi-threaded libraries, linked statically, with all warnings enabled and warnings treated as errors. Additionally, on darwin, we only want to build 32-bit executables for the x86 architecture.

In the default build (when you type just bjam), both debug and release variants will be built and put into the build directory relative to the Jamroot.

Now, type "bjam". If you are on OSX, you may see something like the following:

$ bjam
warning: No toolsets are configured.
warning: Configuring default toolset "gcc".
warning: If the default is wrong, your build may not work correctly.
warning: Use the "toolset=xxxxx" option to override our guess.
warning: For more configuration options, please consult
warning: http://boost.org/boost-build2/doc/html/bbv2/advanced/configuration.html
...found 1 target...

The reason for this is that BB guesses the toolset but on OSX we should really be using the darwin toolset. When Boost Jam starts up, it looks at ~/user-config.jam (somewhere similar on Windows) for a user configuration file. Add one with the following contents:

# ~/user-config.jam
using darwin ;

Now when you hit bjam, you should see something like:

$ bjam
...found 1 target...

Alternatively, if you don't want to pollute your file system, you can execute:

$ bjam toolset=darwin
...found 1 target...

Adding a project target


Now we will add a simple executable that links to some Boost libraries. Create a directory named "app" in your current directory and create a file in this directory named "Jamfile" with the following contents:

# app/Jamfile
exe app : [ glob *.cpp ] ;

Additionally, create a C++ file in the app directory with a trivial main function and the ".cpp" extension. Execute bjam. Nothing changed! That's because we haven't asked BJam to build our project. Try executing "bjam app". You should see something like the following:

$ bjam toolset=darwin app
...found 20 targets...
...updating 17 targets...
common.mkdir build
common.mkdir build/app
common.mkdir build/app/darwin-4.0.1
....
darwin.compile.c++ build/app/darwin-4.0.1/debug/address-model-32/architecture-x86/link-static/threading-multi/main.o
darwin.link build/app/darwin-4.0.1/debug/address-model-32/architecture-x86/link-static/threading-multi/app
common.mkdir build/app/darwin-4.0.1/release
...
darwin.compile.c++ build/app/darwin-4.0.1/release/address-model-32/architecture-x86/link-static/threading-multi/main.o
darwin.link build/app/darwin-4.0.1/release/address-model-32/architecture-x86/link-static/threading-multi/app
...updated 17 targets...

Note that both debug and release builds were created with one invocation. To restrict to one or the other, execute "bjam variant=debug" or "bjam variant=release".

Using Boost


Remember when we downloaded Boost? Now we will use it! The mechanism for using another Boost Jam project is the "use-project" rule. Add the following to your Jamroot file:

use-project /boost : ../boost_1_41_0 ;
alias boost_thread
  : /boost/thread//boost_thread
  : <warnings-as-errors>off # bunch of warnings
  ;
Here we told the build system where the project with the id "/boost" is located. In this case, it is ../boost_1_41_0, relative to the Jamroot file. Yours might be different. Additionally, we added an alias for the Boost thread library. The main reason for the alias is that it keeps any special handling (such as the warnings setting) in one place.

If you type "bjam" now, you will not be surprised that nothing is being built. Let's fix that now. Add the following to the Jamroot file:

build-project app ;

Now, whenever you invoke bjam, the app project will always be built. Invoke bjam now. You should notice that Boost thread is not being built. Again, this is not surprising. We aren't using it anywhere! Modify app/Jamfile to look like the following and execute bjam:

exe app : [ glob *.cpp ] ..//boost_thread ;

You will notice that Boost thread is being built in the Boost directory. This is not very useful as build artifacts are spread all over your disk. (Un?)Fortunately, there is a hack to make this work. Create a new file at the level of your Jamroot named "boost-build.jam". Fill it with the following contents:

# Add --build-dir to command line so that boost Jamfiles pick it up and use this directory to build.

ARGV += --build-dir=build ;

BOOST_ROOT = ../boost_1_41_0 ;
BOOST_BUILD = $(BOOST_ROOT)/tools/build/v2 ;
boost-build $(BOOST_BUILD) ;

Boost Build looks for this file when building Boost (I think) so here we add the --build-dir parameter so that when building boost, it will build to our build directory. That's a lotta building ;-)

Hit "bjam" now. You should see Boost thread being built statically in the build directory now, in both debug and release variants.

Conclusion

In this post, you learned how to build Boost Jam, learned a little bit about the Jam language, and created a simple project utilizing the Boost libraries. Next time, I will build on this post to cover making a plugin-aware C++ application (this is really quite exciting for me!). Again, if you have any questions or comments, feel free to leave them below.