We've reached the end of the free lunch: single-core performance has stopped tracking Moore's law. More cores are being added, but they're not getting faster; there are just more of them.
That makes parallel programming all the more important: every core we add is a core that serial code leaves idle. It also makes the work Microsoft has done to simplify parallel programming with the Task Parallel Library (TPL) in .NET 4 all the more valuable.
Ade illustrated the problem with a stock market analysis program.
He pointed out a number of things that block parallelism in your code: IO read/write constraints, actions with fixed dependencies that impose a required order, and the need to make sure you're targeting the items that actually take the most time.
The TPL gives you Task.Factory.StartNew to launch tasks, ContinueWith to chain onto a task and receive its result, ContinueWhenAll to continue once a whole set of tasks has finished, and Parallel.For to iterate in parallel (though you give up control of iteration order and custom index stepping).
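A minimal sketch of those four pieces together (the values and task bodies are invented for illustration):

```csharp
using System;
using System.Threading.Tasks;

class TplBasics
{
    static void Main()
    {
        // Task.Factory.StartNew launches work on the default scheduler.
        Task<int> analyse = Task.Factory.StartNew(() => 6 * 7);

        // ContinueWith chains onto a task and receives it, so data flows through.
        Task<string> report = analyse.ContinueWith(t => "Answer: " + t.Result);

        // ContinueWhenAll runs once every task in the given set has finished.
        Task summary = Task.Factory.ContinueWhenAll(
            new Task[] { analyse, report },
            done => Console.WriteLine("{0} tasks finished", done.Length));

        // Parallel.For runs iterations concurrently, in no guaranteed order.
        Parallel.For(0, 10, i => Console.WriteLine("item {0}", i));

        summary.Wait();
        Console.WriteLine(report.Result);
    }
}
```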
Ramblings
Think about how you decompose the problem: tasks vs data, control flow, and combined control and data flow.
When dividing up parallelisable work, chunk size matters: make the chunks too big and you underutilise the cores; make them too small and you thrash on scheduling overhead.
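A hedged sketch of tuning this with a range partitioner (the array size and the rangeSize heuristic are illustrative, not a recommendation):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ChunkSize
{
    static void Main()
    {
        double[] data = new double[1000000];

        // rangeSize is the chunk handed to each worker: too big and cores sit
        // idle near the end of the loop, too small and scheduling overhead
        // swamps the real work.
        int rangeSize = data.Length / (Environment.ProcessorCount * 4);

        Parallel.ForEach(Partitioner.Create(0, data.Length, rangeSize), range =>
        {
            for (int i = range.Item1; i < range.Item2; i++)
                data[i] = Math.Sqrt(i);  // stand-in for the real per-item work
        });
    }
}
```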
Using the ThreadPool means relying on a scheduler that focuses on keeping the available resources busy; the TPL layers its own scheduler on top, as seen with the ContinueWith/ContinueWhenAll continuations.
It's important to check that the tasks you're targeting for parallelisation are the ones actually taking the time. Stay focused on business value; don't parallelise for the sake of it.
With Parallel.For you can't write code that depends on incrementing an index sequentially, so you have to factor that into the loop body you run.
Calculating totals, for example, requires an overload of Parallel.For where each worker accumulates a subtotal, and the grand total is combined afterwards with a lock on the value being added to.
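That's the overload with a thread-local seed, a body that builds a per-worker subtotal, and a final merge step; a minimal sketch:

```csharp
using System;
using System.Threading.Tasks;

class ParallelSum
{
    static void Main()
    {
        int[] values = new int[100000];
        for (int i = 0; i < values.Length; i++) values[i] = i;

        long total = 0;
        object totalLock = new object();

        Parallel.For(0, values.Length,
            () => 0L,                                     // each worker starts a subtotal at zero
            (i, loop, subtotal) => subtotal + values[i],  // accumulate without any locking
            subtotal => { lock (totalLock) total += subtotal; });  // merge under the lock

        Console.WriteLine(total);  // 4999950000
    }
}
```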
Parallel tasks can't really use shared state without running into locking issues, so when it comes to sharing: if possible don't share at all, prefer read-only data, and synchronise afterwards.
You can also run several tasks in parallel and take whichever result comes back fastest, then use Cancel and Task.WaitAll to stop and clean up the remaining tasks.
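A sketch of that speculative pattern, assuming three interchangeable strategies (the Strategy method and its delays are invented for illustration):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class SpeculativeRace
{
    // Hypothetical worker: each rival computes the same answer at a different speed.
    static int Strategy(int delayMs, CancellationToken token)
    {
        for (int elapsed = 0; elapsed < delayMs; elapsed += 10)
        {
            token.ThrowIfCancellationRequested();  // bail out once a rival has won
            Thread.Sleep(10);
        }
        return delayMs;
    }

    static void Main()
    {
        var cts = new CancellationTokenSource();
        Task<int>[] rivals =
        {
            Task.Factory.StartNew(() => Strategy(500, cts.Token), cts.Token),
            Task.Factory.StartNew(() => Strategy(100, cts.Token), cts.Token),
            Task.Factory.StartNew(() => Strategy(900, cts.Token), cts.Token)
        };

        int winner = Task.WaitAny(rivals);        // block until the fastest returns
        Console.WriteLine("Fastest: {0}", rivals[winner].Result);

        cts.Cancel();                             // tell the losers to stop
        try { Task.WaitAll(rivals); }             // wait for them to acknowledge
        catch (AggregateException) { /* the cancelled rivals surface here */ }
    }
}
```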
BlockingCollection: read a file in line by line, adding each line to the blocking collection, so that processing the items can happen in parallel even though producing them can't.
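A sketch of that producer/consumer shape ("input.txt" is a stand-in file name):

```csharp
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

class LinePipeline
{
    static void Main()
    {
        var lines = new BlockingCollection<string>(boundedCapacity: 100);

        // Producer: reading the file is inherently sequential.
        var reader = Task.Factory.StartNew(() =>
        {
            foreach (string line in File.ReadLines("input.txt"))
                lines.Add(line);        // blocks if the consumers fall behind
            lines.CompleteAdding();     // tells consumers no more is coming
        });

        // Consumers: processing each line can happen in parallel.
        Parallel.ForEach(lines.GetConsumingEnumerable(),
            line => Console.WriteLine(line.ToUpperInvariant()));

        reader.Wait();
    }
}
```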
Don't just rewrite your loops; figure out what your users want and what's actually taking the time.
Architect for Parallelism.
We're increasingly designing our systems to accommodate latency, a trend that's only growing now that our code runs in the cloud. RX is a handy tool for this.
RX is a library for composing asynchronous and event-based programs using observable collections: systems where data is constantly arriving and needs to be reacted to.
You can build this already, but today it's really hard. We tend to work with collections in a pull-based model, iterating ("give me the next item"), which blocks. That can mean locked-up programs that become unresponsive.
The key interfaces are IObservable<T> and IObserver<T>, which give you Subscribe plus OnNext, OnError and OnCompleted, meaning you're reacting to the flow of data rather than controlling it.
It's reactive rather than interactive.
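A minimal sketch of the push model, assuming a current Rx release for the namespaces:

```csharp
using System;
using System.Reactive.Linq;  // namespace in shipped Rx releases; early drops differed

class PushSketch
{
    static void Main()
    {
        IObservable<long> ticks = Observable.Interval(TimeSpan.FromSeconds(1)).Take(3);

        // Subscribe hands the source an observer; from here the data pushes
        // itself at us via OnNext/OnError/OnCompleted instead of us pulling it.
        using (ticks.Subscribe(
            t => Console.WriteLine("OnNext: {0}", t),
            ex => Console.WriteLine("OnError: {0}", ex.Message),
            () => Console.WriteLine("OnCompleted")))
        {
            Console.ReadLine();  // keep the process alive while values arrive
        }
    }
}
```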
Perception is reality: users faced with hangs and slow-updating data believe that's what the app is.
Be aware that if you're not using a truly asynchronous observable, you won't see asynchronous behaviour. For example, Enumerable.Range(0, 3).ToObservable() runs synchronously on the subscribing thread.
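A sketch of the difference; note that ThreadPoolScheduler.Instance assumes a newer Rx build (older drops exposed the same idea through different scheduler names):

```csharp
using System;
using System.Linq;
using System.Reactive.Concurrency;
using System.Reactive.Linq;
using System.Threading;

class SyncVersusScheduled
{
    static void Main()
    {
        // Without a scheduler the values are delivered on the subscribing
        // thread, so Subscribe doesn't return until they've all been pushed.
        Enumerable.Range(0, 3).ToObservable()
            .Subscribe(i => Console.WriteLine("sync {0}", i));
        Console.WriteLine("printed after all the sync values");

        // Handing in a scheduler moves delivery onto a pool thread instead.
        Enumerable.Range(0, 3).ToObservable(ThreadPoolScheduler.Instance)
            .Subscribe(i => Console.WriteLine("async {0} on thread {1}",
                i, Thread.CurrentThread.ManagedThreadId));
        Console.WriteLine("usually printed before the async values");

        Console.ReadLine();
    }
}
```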
RX is available for WPF, the Silverlight Toolkit and WP7.
You can choose which thread to observe on, for example when observing a textbox KeyUp event.
You can throttle by time and filter which events to observe, as in the Google Translate example he showed.
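A hedged sketch of that pattern over a WPF textbox (the window is built in code to keep it self-contained; the 300 ms throttle and length filter are arbitrary choices):

```csharp
using System;
using System.Reactive.Linq;
using System.Threading;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;

class SuggestSketch
{
    [STAThread]
    static void Main()
    {
        var box = new TextBox();
        var window = new Window { Content = box, Title = "Type..." };

        window.Loaded += (s, e) =>
            Observable.FromEventPattern<KeyEventArgs>(box, "KeyUp")
                .Select(_ => box.Text)
                .Throttle(TimeSpan.FromMilliseconds(300))   // wait for a pause in typing
                .DistinctUntilChanged()                     // ignore non-changes
                .Where(text => text.Length > 2)             // filter which events matter
                .ObserveOn(SynchronizationContext.Current)  // hop back to the UI thread
                .Subscribe(text => window.Title = "Searching for: " + text);

        new Application().Run(window);
    }
}
```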
They're working on new async features that integrate with the RX framework and, interestingly, a LINQ to Objects for JavaScript.
A whistle-stop tour of the new features of MVC3: pluggable view engines like Razor, HTML5 default templates, integration with jQuery Validate, NuGet (automatic package installation and dependency resolution), DI/IoC injection hooks, global filters, more granular request validation (actions can be marked ValidateInput(false), and individual model properties [AllowHtml], to let HTML content through) and a bunch more.
A search for MVC 3 will give the same info as I would, so I won't enumerate the features in any detail here. There will certainly be some useful additions to our processes though.
An unfortunate choice for me: the speaker, Kyle Maxwell, was fighting his nerves the whole way through, which made it quite excruciating to watch (and frankly dull).
He talked briefly and somewhat incoherently about various subsystems they use: a queuing system, and daemons used to spray messages out to data shards. The different parts seem to be written in entirely different languages (Scala, Ruby and some others I'd never heard of).
Altogether a disappointing session; I'd hoped for some takeaways and came away with nothing.
Henrik made a bunch of points which are covered in the free book he released [Insert url for PDF] (it requires registration, but nothing more). So I'll just describe one example he used to show how limiting work in progress is a productivity win.
After getting a volunteer from the audience and timing how long it took him to write a name (4 seconds), he asked four more people to come up on stage.
He then asked for an estimate of how long writing the four names would take (20 seconds).
Then he imposed the obstacle of unlimited work in progress: all the names had to be written simultaneously, one letter of each at a time. It took almost 2 minutes! Being a German writing Swedish names read out to him added to the time taken.
This was a somewhat contrived example, but it still illustrates that context switching and unlimited work in progress can be a real problem.
Scale that up to a normal job, where you might have to context switch three or four times in a single day, and you can see how this example bears out in real life.