TLA311 The Future of C#
C# 1.0 gave us the ability to run managed code, C# 2.0 gave us generics, C# 3.0 gave us LINQ.
There's a strong drive towards dynamic languages and declarative programming at the moment. Declarative programming is about the what, not the how: with LINQ we say "filter these results on Id", and we're abstracted from how it does that.
Dynamic languages - simple and succinct, implicitly typed, meta-programming, no compilation
Static languages - robust, performant, intelligent tools, better scaling
C# 4.0 is enabling greater dynamic features.
Concurrency is a big challenge: we've hit a wall with clock speed, so we're now going multi-core. In order to make use of these processors, you have to be able to run code in parallel and asynchronously.
Microsoft are pushing co-evolution of the languages that run on .NET, so the features that C# has and VB doesn't (and vice versa) are now being developed in parallel; the two languages should become much more alike.
Dynamic programming, DLR (Dynamic Language Runtime) allows a bridging between the static CLR and the dynamic world. Uses expression trees, dynamic dispatch, call site caching.
IronRuby, IronPython, C# and VB can all run against the DLR, which allows interop between them.
A static type called dynamic allows dynamic calc = GetCalc(); int sum = calc.Invoke("Add", 10, 20); — the actual type is deferred until runtime, where the call is bound against the actual types. The trade-off is type safety, but dynamic is aimed at situations where you don't have type safety anyway.
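A minimal sketch of the dynamic keyword described above; the Calculator class is an illustrative stand-in for whatever GetCalc() would return, not something from the session:

```csharp
using System;

class Calculator
{
    public int Add(int a, int b) { return a + b; }
}

class Program
{
    static void Main()
    {
        // 'dynamic' defers member resolution until runtime:
        // the Add call is bound by the DLR, not the compiler.
        dynamic calc = new Calculator();
        int sum = calc.Add(10, 20);
        Console.WriteLine(sum); // 30
    }
}
```

If Calculator had no Add method, this would still compile and only fail at runtime, which is exactly the type-safety trade-off mentioned above.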
JavaScript methods declared on the page can be called from the C# code by getting the HtmlWindow as dynamic and calling window.MyJSMethod();
Moving JavaScript logic into C# dynamics means you don't need to code in JS, though you lose all IntelliSense because there are no types to interrogate.
Optional and named parameters: currently we use a lot of method overloading, e.g. OpenFile(string path, bool throwIfNotFound) alongside OpenFile(string path). With optional parameters you can keep just the first overload, giving throwIfNotFound a default of true if nothing is passed. Named parameters let you call OpenFile(path, throwIfNotFound: false).
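A hedged sketch of the overload-collapsing idea; OpenFile here is illustrative (it just reports its arguments rather than opening anything), and the parameter is named throwIfNotFound because throw itself is a reserved word in C#:

```csharp
using System;

class FileOpener
{
    // One method with a default value replaces the two overloads
    // OpenFile(path) and OpenFile(path, throwIfNotFound).
    static string OpenFile(string path, bool throwIfNotFound = true)
    {
        return path + " (throwIfNotFound=" + throwIfNotFound + ")";
    }

    static void Main()
    {
        // Omitted argument falls back to the default of true.
        Console.WriteLine(OpenFile("myfile.txt"));
        // Named argument makes the call site self-describing.
        Console.WriteLine(OpenFile("myfile.txt", throwIfNotFound: false));
    }
}
```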
There are improvements with COM too: COM interfaces have optional parameters where C# currently doesn't, so today you have to write OpenFile("myfile.txt", Missing.Value, Missing.Value); now you can just call OpenFile("myfile.txt").
Improvements also with co- and contra-variance. Arrays are covariant, but not safely: you can put a Button into an object[] that is really a string[], and it only fails at runtime. Generic variance in C# 4.0 lets you pass collections to methods based on their base types: a method taking IEnumerable<Person>, for example, can be passed an IEnumerable<Customer>.
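A hedged sketch of the variance feature, using illustrative Person/Customer types that aren't from the session; IEnumerable<T> is the covariant interface, so a sequence of the derived type is accepted where a sequence of the base type is expected:

```csharp
using System;
using System.Collections.Generic;

class Person { }
class Customer : Person { }

class Program
{
    // Declared against the base type; covariance means any
    // IEnumerable<Customer> is also an IEnumerable<Person>.
    static int CountPeople(IEnumerable<Person> people)
    {
        int count = 0;
        foreach (var p in people) count++;
        return count;
    }

    static void Main()
    {
        var customers = new List<Customer> { new Customer(), new Customer() };
        Console.WriteLine(CountPeople(customers)); // 2
    }
}
```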
He showed a C# evaluator which is scoped, so you can declare functions in one evaluation and use them in later ones. It lets you declare and run code at runtime, which is useful for things like plugins and even secure remote debuggers.
PDC309 Designing applications for Windows Azure
We've seen a few more conceptual and technology sessions on Azure during the week, but I had yet to see any real-life business use; that's what I hoped to get out of this session, and I wasn't disappointed.
He started with what Azure is not: it's not a hosted Windows server, or a SQL Server that you have direct access to. He covered both things not to do and things you can't do.
He used the business example of the RNLI, who monitor 10,000 vessels and want to extend their safety-of-life services to the entire UK marine leisure market. To build in that redundancy themselves would clearly be a huge infrastructure headache; with Azure, they need only increase the number of running instances to match demand.
Working with queued items: when you pull an item off the queue it's hidden from other workers with a default timeout, so if you don't finish working with it (a worker process fails, for example) it gets put back on the queue for another worker to pick up.
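A minimal in-memory sketch of the visibility-timeout pattern described above; the class names and API are my own illustration, not the Azure queue API. A dequeued message is hidden rather than deleted, and reappears if the worker never confirms completion:

```csharp
using System;
using System.Collections.Generic;

// Sketch of queue visibility timeouts: Dequeue hides an item instead of
// removing it; Complete deletes it; Requeue simulates the timeout firing
// after a worker crash, making the item visible again.
class VisibilityQueue<T>
{
    private readonly Queue<T> _visible = new Queue<T>();
    private readonly Dictionary<Guid, T> _hidden = new Dictionary<Guid, T>();

    public void Enqueue(T item) { _visible.Enqueue(item); }

    // Pull an item: it becomes invisible to other workers until
    // Complete(receipt) is called or the timeout expires.
    public (Guid receipt, T item)? Dequeue()
    {
        if (_visible.Count == 0) return null;
        var item = _visible.Dequeue();
        var receipt = Guid.NewGuid();
        _hidden[receipt] = item;
        return (receipt, item);
    }

    public void Complete(Guid receipt) { _hidden.Remove(receipt); }

    // Timeout expired without Complete: put the item back for others.
    public void Requeue(Guid receipt)
    {
        T item;
        if (_hidden.TryGetValue(receipt, out item))
        {
            _hidden.Remove(receipt);
            _visible.Enqueue(item);
        }
    }

    public int VisibleCount { get { return _visible.Count; } }
}

class Program
{
    static void Main()
    {
        var q = new VisibilityQueue<string>();
        q.Enqueue("resize image");

        var work = q.Dequeue().Value;       // a worker takes the item
        Console.WriteLine(q.VisibleCount);  // 0: hidden from other workers

        q.Requeue(work.receipt);            // worker died, timeout fired
        Console.WriteLine(q.VisibleCount);  // 1: back on the queue
    }
}
```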
Because of the parallel processing model, you need to write your code to expect failures at any point (inserting, deleting, processing and so on) in case of a partial failure. Although not active in the CTP, you can override the health status so Azure knows whether to spin up another instance in a non-catastrophic failure scenario. You also need to forget ultimate accuracy, because a process can fail when you're 80% of the way through your app logic, for example. Poison messages (items added to the queue that cause the process to fail whenever they're handled) need to be dealt with by us, otherwise they'll keep being retried.
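A hedged sketch of the poison-message idea: count how many times a message has been attempted and park it in a dead-letter store past a threshold, so a message that always crashes its handler can't loop forever. The handler, message and stores here are all illustrative:

```csharp
using System;
using System.Collections.Generic;

class Program
{
    const int MaxAttempts = 3;

    static void Main()
    {
        var attempts = new Dictionary<string, int>();
        var deadLetter = new List<string>();
        string message = "malformed payload"; // always fails processing

        for (int i = 0; i < 5; i++)
        {
            int count;
            attempts.TryGetValue(message, out count);
            attempts[message] = ++count;

            if (count > MaxAttempts)
            {
                deadLetter.Add(message); // park it for investigation
                break;
            }

            try { Process(message); }
            catch (FormatException) { /* leave it for the next attempt */ }
        }

        Console.WriteLine(deadLetter.Count); // 1: dead-lettered, not retried forever
    }

    // Simulates a handler that a poison message always crashes.
    static void Process(string msg) { throw new FormatException(msg); }
}
```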
The key message was that failure is guaranteed, deal with it. Compensate for errors, don't rely on state.
Useful information: there are always at least three copies of your data, for robustness.
Some best practices for scaling tables were shown, such as using the partition key as the unit of scale: if you imagine every partition stored on a separate server, you can see how this scales out well. The row key is there for query performance.
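A hedged illustration of that partition/row key choice applied to the RNLI example above; the entity shape and naming are my own, not the real schema. Partitioning by vessel id lets each vessel's readings live (conceptually) on its own server, while a sortable row key keeps range queries within a partition fast:

```csharp
using System;

// Illustrative table entity: PartitionKey is the unit of scale-out,
// RowKey the unit of query performance within a partition.
class VesselReading
{
    public string PartitionKey;
    public string RowKey;
    public double Lat, Lon;
}

class Program
{
    static void Main()
    {
        var reading = new VesselReading
        {
            PartitionKey = "vessel-10432",
            // Zero-padded ticks sort chronologically as strings,
            // so time-range queries within a vessel stay cheap.
            RowKey = new DateTime(2008, 11, 1, 12, 0, 0).Ticks.ToString("d19"),
            Lat = 50.7,
            Lon = -1.9
        };
        Console.WriteLine(reading.PartitionKey);
    }
}
```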
A really good talk with solid best practices, much needed after the more abstract hello-world examples we'd seen elsewhere.
DVP315 Dos and don'ts in Silverlight 2 Applications
Another useful best practice session. 75 minutes of what to be aware of, avoid and make sure you do.
For user experience, don't use the default Silverlight installer (a white image with a single "Install Silverlight" button in the middle); create your own and point to it. This is a great place to show an image of what the user can expect once they've installed your app.
Don't load your application all at once; only load what you need to start it, otherwise you make the user wait. You can download other resources to the user's isolated storage as a cache, then load them from there next time they run it.
If they do have to wait, customise the wait screen by creating your own and pointing the plugin's onSourceDownloadProgressChanged event at your progress-bar update method.
There were a lot of performance tips:
Don't use transparent backgrounds; match the website colour.
Don't animate text: convert text into vectors and animate that, though avoid animating paths if possible, as both are expensive.
Dispose of invisible controls: once you've animated to an opacity of 0, set visibility to collapsed or otherwise remove them, as they're never garbage collected until the references are removed.
Use background workers and async web service calls.
Use GC.GetTotalMemory(true) to find out the current memory usage of your application; that way you can create a memory-usage display control for debug builds.
Use EnableRedrawRegions and EnableFrameRateCounter to give you a good view of how your interaction impacts performance.
Don't set the width and height of media elements; provide alternate-sized media instead, as the media actually gets recoded, blended for transparency etc. on resize.
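The GC.GetTotalMemory tip above can be sketched in plain C#; in an app you'd bind the number to a debug-only overlay control rather than the console, and the allocation here just simulates the app doing work:

```csharp
using System;

class Program
{
    static void Main()
    {
        // true forces a collection first, so the figure reflects
        // live objects rather than garbage awaiting collection.
        long before = GC.GetTotalMemory(true);

        var buffer = new byte[1_000_000]; // simulate the app allocating

        long after = GC.GetTotalMemory(false);
        Console.WriteLine(after > before); // the readout tracks allocations
        GC.KeepAlive(buffer);
    }
}
```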
Other tips were:
Hide controls that can't be interacted with such as textboxes in fullscreen mode where you don't have keyboard access.
Help designers out by outputting mock content for controls, shown only in debug mode, using something like preprocessor directives (#if DEBUG).
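A minimal sketch of the #if DEBUG mock-content idea; the control and data are invented for illustration, and in a real project the DEBUG symbol would come from the Debug build configuration rather than a #define in the file:

```csharp
#define DEBUG // normally supplied by the Debug build configuration
using System;

class CustomerListControl
{
    public string Content = "";

    public void Load()
    {
#if DEBUG
        // Give designers something to look at before real data is wired up.
        Content = "Jane Doe, John Smith";
#else
        Content = LoadFromService();
#endif
    }

    // Stand-in for the real data call; never runs in a DEBUG build.
    static string LoadFromService() { throw new NotImplementedException(); }
}

class Program
{
    static void Main()
    {
        var control = new CustomerListControl();
        control.Load();
        Console.WriteLine(control.Content);
    }
}
```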