Elegant Code vs.(?) Clean Code

Eric Lippert, a developer on the Microsoft C# compiler team, recently made a post called Comma Quibbling, asking readers to post their solutions to a programming exercise. The requirements are as follows:

  1. If the sequence is empty then the resulting string is "{}".
  2. If the sequence is a single item "ABC" then the resulting string is "{ABC}".
  3. If the sequence is the two item sequence "ABC", "DEF" then the resulting string is "{ABC and DEF}".
  4. If the sequence has more than two items, say, "ABC", "DEF", "G", "H" then the resulting string is "{ABC, DEF, G and H}". (Note: no Oxford comma!)

On top of that, he stipulated "I am particularly interested in solutions which make the semantics of the code very clear to the code maintainer."

Before doing anything else, let's nail down the specification above with some tests, using the NUnit testing framework:

[TestFixture]
public class SentenceComposerTests
{
  [Test]
  public void TestZero()
  {
    var parts = new string[0];
    var result = parts.ConcatenateWithAnd();

    Assert.AreEqual("{}", result);
  }

  [Test]
  public void TestOne()
  {
    var parts = new[] { "one" };
    var result = parts.ConcatenateWithAnd();

    Assert.AreEqual("{one}", result);
  }

  [Test]
  public void TestTwo()
  {
    var parts = new[] { "one", "two" };
    var result = parts.ConcatenateWithAnd();

    Assert.AreEqual("{one and two}", result);
  }

  [Test]
  public void TestThree()
  {
    var parts = new[] { "one", "two", "three" };
    var result = parts.ConcatenateWithAnd();

    Assert.AreEqual("{one, two and three}", result);
  }

  [Test]
  public void TestTen()
  {
    var parts = new[] { "one", "two", "three", "four", "five", "six", "seven", "eight", "nine", "ten" };
    var result = parts.ConcatenateWithAnd();

    Assert.AreEqual("{one, two, three, four, five, six, seven, eight, nine and ten}", result);
  }
}

The tests assume that the method ConcatenateWithAnd() is declared as an extension method. With the tests written, I figured I'd take a crack at the solution, keeping the last condition foremost in my mind rather than compactness, elegance or cleverness (which so often predominate). Instead, I wanted to make the special cases given in the specification as clear as possible in the code. On top of that, I added the following conditions to the implementation:

  1. Do not create a list or array out of the enumerator. That is, do not invoke any operation that would involve reading the entire contents of the enumerator at once (e.g. the extension methods Count() or Last() are verboten).
  2. Avoid comments; instead, make the code comment itself.
  3. Make the code as clearly efficient as possible without invoking any potentially costly library routines whose asymptotic order is unknown.

That said, here's my version:

public static string ConcatenateWithAnd(this IEnumerable<string> words)
{
  var enumerator = words.GetEnumerator();

  if (!enumerator.MoveNext())
  {
    return "{}";
  }

  var firstItem = enumerator.Current;

  if (!enumerator.MoveNext())
  {
    return "{" + firstItem + "}";
  }

  var secondItem = enumerator.Current;

  if (!enumerator.MoveNext())
  {
    return "{" + firstItem + " and " + secondItem + "}";
  }

  var builder = new StringBuilder("{");
  builder.Append(firstItem);
  builder.Append(", ");
  builder.Append(secondItem);

  var item = enumerator.Current;

  while (enumerator.MoveNext())
  {
    builder.Append(", ");
    builder.Append(item);
    item = enumerator.Current;
  }

  builder.Append(" and ");
  builder.Append(item);
  builder.Append("}");

  return builder.ToString();
}

Looking at this from a maintenance or understanding point-of-view, I have the following notes:

  • More novice users will probably not immediately grasp the use of the enumerator. Though it's part of the .NET library, its use is usually hidden by the syntactic sugar of the foreach-statement.
  • The formatting instructions for curly brackets and separators are included several times, which decreases maintainability should the output specification change.
  • The multiple calls to the string-concatenation operator and to StringBuilder.Append() are intentional. I wanted to avoid having to use escaped {} in the format string (e.g. String.Format("{{{0} and {1}}}", firstItem, secondItem) is confusing if you're not aware how curly brackets are escaped in a format string).

Other than those things, it seems relatively compact and efficient. With my own version written, I looked through the comments on the post to see if any other interesting solutions were available. I came up with two that caught my eye, one by Jon Skeet and another by Hristo Deshev, who submitted his in F#.

Hristo's example in F# is as follows:

let format (words:list<string>) =
   let rec makeList (words: list<string>) =
       match words with
           | [] -> ""
           | first :: [] -> first
           | first :: second :: [] -> first + " and " + second
           | first :: rest -> first + ", " + (makeList rest)
   "{" + (makeList words) + "}"

That's so cool: the formulation in F# reads almost like plain English! That's pretty damned maintainable, I'd say. I have no way of judging the performance of this pattern matching, but it does make use of recursion: lists with thousands of items will incur thousands of nested calls.

Next up is Jon Skeet's version in C#:

public static string JonSkeetVersion(this IEnumerable<string> words)
{
  var builder = new StringBuilder("{");
  string last = null;
  string penultimate = null;
  foreach (string word in words)
  {
    // Shuffle existing words down
    if (penultimate != null)
    {
      builder.Append(penultimate);
      builder.Append(", ");
    }
    penultimate = last;
    last = word;
  }
  if (penultimate != null)
  {
    builder.Append(penultimate);
    builder.Append(" and ");
  }
  if (last != null)
  {
    builder.Append(last);
  }
  builder.Append("}");
  return builder.ToString();
}

This one is very clever and handles all cases in a single loop rather than addressing special cases outside of a loop (as mine did). Also, all of the formatting elements -- the curly brackets and item separators -- are mentioned only once, improving maintainability. I immediately liked it better than my own solution from a technical standpoint. While I'm drawn to the cleverness and elegance of the solution, I'm not the target audience. Skeet's version forces you to reason out the special cases; it's not immediately obvious how the special cases for zero, one and two elements are handled. Also, while I am tickled pink by the aptness of the variable name penultimate, I wonder how many non-native English speakers would understand its intent without a visit to an online dictionary. The name secondToLast would have been a better, though far less sexy, choice.

It's very easy to overestimate how much people are willing to actually read code that they didn't write. If the code requires a certain amount of study to understand, then they may just leave it well enough alone and seek out the original developer. If, however, it looks quite easy and the special cases are made clear -- as in my version -- they are far more likely to dig in and work with it. Since the problem is defined as three special cases and a general case, it is probably best to offer a solution where these cases are immediately obvious, to ease maintainability -- as long as you don't sacrifice performance unnecessarily. Cleverness is wonderful, but you may end up severely limiting the number of people willing -- or able -- to work on that code.

Encodo's Development Environment

For the software developers in our audience, we've put together a list of the most essential .Net tools that we use daily and without which we wouldn't want to work.

Visual Studio 2008

Many months ago, we moved our entire .Net development to Visual Studio 2008. VS2008 supports .Net 2.0, 3.0 and 3.5 projects, which made the transition both quick and easy. Given the choice, we use .Net 3.5.1, as we've grown quite attached to the new language features like Linq, lambda-expressions and so on and wouldn't want to do without them anymore.

Resharper (R#) and Agent Smith

The ReSharper addon developed by JetBrains quickly revealed itself as an extremely useful Visual Studio addon. Its main strength is recommending ways to clean up and improve source code but it also includes an excellent unit-testing client (NUnit-compatible), enhanced code-navigation, analysis tools and much more.

ReSharper also supports plugins of its own and we've installed Agent Smith, which performs spell-checking and enforces other naming and coding conventions.


GhostDoc

GhostDoc is a freeware Visual Studio addon that enhances the source-code documentation of Visual Studio. It not only generates a complete documentation scaffolding based on method and property signatures, but also fills in the text with actual documentation that can often be used as-is.


DPack

DPack is also a freeware Visual Studio addon that includes several useful functions. We use it primarily for the two functions "File Browser" and "Code Browser", both of which improve code-navigation speed. You use the file browser to find a file in the solution by typing part of its name; then you use the code browser to search for an identifier within that file (matching classes, methods, properties, fields, interfaces and so on) and jump to it by hitting enter.

Encodo Perforce Plugin

Perforce has been Encodo's version control system (VCS) of choice for a while now, but the Visual Studio addon provided by Perforce themselves isn't very good, so we decided to make our own addon (implemented as a so-called Source Control Package). After a minimum of development time, we've got an addon that we've been using for months both in the office and remotely and that fully integrates with Visual Studio.

Developer Express Components

The UI components provided with the .Net-framework are good, but we prefer the much more powerful components available from Developer Express for both our Windows and web applications. Components range from simpler components like toolbars, ribbons, grids and treelists to more advanced components like full schedulers as well as reporting & printing. All components support full skinning.

Other component libraries

Though we use Developer Express almost exclusively, we've also worked with components from Telerik and ComponentArt. Though all three providers have extremely powerful components, each naturally has its strengths and weaknesses.


Jing

And lastly, we have one of the newer tools that we find ourselves using more and more lately: Jing. Jing is an application that both takes screenshots and records screencasts. It can store recordings locally or publish them directly to screencast.com under your user account. This tool is extremely helpful for both writing documentation and providing support (e.g. showing a user how to do something), easy to use and free.

That should give you a good overview of the most important tools that we currently use at Encodo. Of course, every workstation has numerous other small tools that help us get our jobs done; if you want to know more, feel free to ask!

Encodo's Development Environment

For the software developers among our readers, we've gathered together in this article the most important .Net tools that we work with daily and that have grown near and dear to our hearts.

Visual Studio 2008

Some months ago, we moved our entire .Net development to Visual Studio 2008 (VS2008). With VS2008 we can build .Net 2.0, 3.0 and 3.5 projects, which made the transition much quicker and easier. Given the choice, we use .Net 3.5.1, as we've grown quite attached to the new language features like Linq, lambdas and so on and don't want to do without them anymore.

Resharper (R#) and Agent Smith

Resharper by JetBrains has proven to be an extremely valuable Visual Studio addon. Besides suggesting improvements to source code, it includes a powerful unit-testing client (NUnit-compatible), code-navigation and analysis tools, and much more.

As a Resharper plugin, we've installed Agent Smith. This plugin performs spell-checking for source code, strings and comments.

GhostDoc

GhostDoc is a free Visual Studio addon that beefs up Visual Studio's source-code documentation functionality. On the one hand, it generates more complete XML documentation scaffolding; on the other, it often produces quite good documentation suggestions based on method names and parameters.

DPack

DPack is also a free Visual Studio addon that extends it with several useful functions. At Encodo, we primarily use the two functions "File Browser" and "Code Browser". Both serve to speed up navigation in source code. The former finds and opens files anywhere in the solution as you type part of a file name, while the latter searches the current file for the entered element (class, method, property, field, interface, etc.) and jumps to it.

Encodo Perforce Plugin

Since Encodo has been working with the version control system (VCS) Perforce for quite a while now, we needed a corresponding Visual Studio integration. Since the vendor's own addon isn't really mature, we decided to develop our own VS addon (implemented as a so-called Source Control Package), which we've likewise been using successfully for several months.

Developer Express Components

Where the UI components of the .Net framework leave off, we use the powerful components from Developer Express for both Windows and web applications: for example, toolbars & ribbons, grids, treelists, schedulers, as well as printing & reporting and skinning.

Alternative component libraries

As an alternative to the components from Developer Express, we've also worked with those from Telerik and ComponentArt. All three vendors offer extremely powerful components, each with its own strengths and weaknesses.

Jing

Finally, one of the newest tools, which we find ourselves using more and more: Jing. Jing is an application for taking screenshots and recording screencasts. It can store recordings locally or publish them directly to screencast.com. The tool is extremely helpful for support and documentation, easy to use and free.

Those, in brief, are the most important tools we currently use. Numerous other small tools are installed on the individual workstations; if you'd like to know more, feel free to ask!

Entity Framework: Be Prepared

In August of 2008, Microsoft released the first service pack (SP1) for Visual Studio 2008. It included the first version (1.0) of Microsoft's generalized ORM, the Entity Framework. We at Encodo were quite interested as we've had a lot of experience with ORMs, having worked on several of them over the years. The first was a framework written in Delphi Pascal that included a sophisticated ORM with support for multiple back-ends (Sql Server, SQLAnywhere and others). In between, we used Hibernate for several projects in Java, but moved on quickly enough.1 Most recently, we've developed Quino in C# and .NET, with which we've developed quite a few WinForms and web projects. Though we're very happy with Quino, we're also quite interested in the sophisticated integration with LINQ and multiple database back-ends offered by the Entity Framework. Given that, two of our more recent projects are being written with the Entity Framework, keeping an eye out for how we could integrate the experience with the advantages of Quino.2

What follows are first impressions acquired while building the data layer for one of our projects. The database model has about 50 tables, is highly normalized and is pretty straightforward with auto-incremented integer primary keys everywhere, and single-field foreign keys and constraints as expected. Cascaded deletes are set for many tables, but there are no views, triggers or stored procedures (yet).

Eventually, EF will map your model and the runtime performs admirably (so far). However, designing that model is not without its quirks:

  • Be prepared to have minor changes in the database result in a dozen errors on the mapping side.
  • Be prepared for error messages so cryptic, you'll think the C++ template compiler programmers had some free time on their hands.
  • Be prepared to edit XML by hand when the designer abandons you; you'll sometimes have to delete swathes of your XML in order to get the designer to open again.
  • Be prepared to wait quite a while for the designer to refresh itself once the model has gotten larger.
  • Be prepared to regularly restart Visual Studio 2008 once your model has gotten bigger; updating the model from the database even for a minor change either takes a wholly unacceptable amount of time or sends the IDE into limbo; either way, you're going to have to restart it.

To be fair, this is a 1.0 release; it is to be expected that there are some wrinkles to iron out. However, one of the wrinkles is that a model with 50 tables is considered "large".

Large Model Support

With 50 tables and the designer slowing down, you're forced to at least consider options for splitting the model. The graphic below shows the model for our application:


There exists official support for splitting the model into logical modules, but it's just a bit complex; that is, you have to significantly change the generated files in order to get it to work and there is no design-time support whatsoever for indicating which entities belong to which modules. The blog posts by a member of the ADO.NET team called Working With Large Models In Entity Framework (Part 1 and Part 2) offer instructions for how to do this, but you'll have to satisfy one of the following conditions:

  1. You don't understand how bad it is to alter generated code because you'll just have to do it again when the database model has changed.
  2. You're new to this whole programming thing and aren't sufficiently terrified by a runtime-only, basically unsupported feature in a version 1.0 framework.
  3. Development with a single model has gotten so painful that you have to bite the bullet and go for it.
  4. Condition (3) plus you have enough money and/or time in the budget to build a toolset that applies your modularization as a separate build-step after EF has generated its classes.3

The low-level, runtime-only solution offered by the ADO.NET team ostensibly works, though it probably isn't very well-tested at all. Designer and better runtime-integration would be key in supporting larger models, but the comments at the second blog post indicate that designer support likely won't make version 2 of the Entity Framework. This is a shocking admission, as it means that EF won't scale on the development side for at least two more versions.

The Designer

The designer is probably the weakest element of the Entity Framework; it is quite slow and requires a lot of work with the right mouse-button.

  • Once you've got a lot of entities, you'll want to collapse all of them in the diagram and redo the automatic layout so that you can see your model better. This takes much longer than you would think it should. Once you've collapsed a class, however, error messages no longer jump to the diagram, so you'll have to re-expand them all in order to figure out in which entity an error occurred.
  • "Show in Designer" doesn't work with collapsed classes; even when classes are expanded it's extremely difficult to determine which class is selected because the selection rectangle gets lost in the tangle of lines that indicate relationships.
  • Simply selecting a navigation property and pressing Delete does nothing; you have to select each property individually, right-click and then Select Association. The association is highlighted, albeit very faintly, after which you can press Delete to get rid of it.

If you're right in the middle of a desperate action to avoid reverting to the last version of your model from source control, you'll be pleased to discover that, sometimes, Visual Studio will prevent you from opening the model, either in visual- or XML-editing mode. Neither a double-click in the tree nor explicitly selecting "Open" from the shortcut menu will open the file. The only thing for it is to re-open the solution, but at least you don't lose any changes.

Synchronizing with the Database

The biggest time-sink in EF is the questionable synchronization with the database. Often, you will be required to intervene and "help" EF figure out how to synchronize -- usually by deleting chunks of XML and letting it re-create them.

  • You may think that removing a table from the mapping will let you re-import it in its entirety from the database. This is not the case; be prepared to remove the last traces of that table name from the XML before you can re-import it.
  • Be prepared for duplicated properties when the mapper doesn't recognize a changed property and establishes the close sibling "property1".
  • Be prepared for EF to get confused when you've made changes; the bigger your model is, the more likely this is to happen and the less helpful the error messages. They will refer to errors on line numbers 6345 and 9124 and half the file, when opened as XML, will be underlined in wavy blue to indicate a compile error.
  • Be prepared to delete associations between entities that have changed on the database (e.g. have become nullable) because the EF updater does not notice the change and update the relationship type from 1 (One) to 0..1 (Zero or One).
  • If you have objects in the database that do not match the constraints of the database model, they cannot even be loaded by EF. (e.g. if as above, you've made a property nullable in the database, but the EF model still thinks it cannot be nullable).

Here's a development note written after making minor changes to the database:

I added a couple of relationships between existing tables and there were suddenly 17 compile errors. I desperately tried to delete those relationships from the editor, to no avail. I opened it as XML and started deleting the affected sections in the hopes that I would be able to compile again and re-sync with the database. After a few edits, the editor would no longer open and the list of errors was getting longer as the infection spread; I would have to cut out the cancer. The cancer, in this case, was all of the classes involved in the new relationships. Luckily, they were mostly quite small and mostly used the identifiers from the database.4 Once the model compiled again (the code did not build because it depended on generated code that was no longer generated), I could open the editor and re-sync with the database. Now it worked and had no more problems. All this without touching the database, which places the blame squarely on EF and its tendency to get confused.

As you can imagine, adventures like these can take quite a bit of time and break up the development flow considerably.

Initialization of Dates

The problem with dates all starts with this error message:

SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.

Be prepared to guess which of your several DateTime fields is causing the error because the error message doesn't mention the field name. Or the class name either, if you've had the audacity to add several different types of objects -- or, God forbid, a whole tree of objects -- before calling SaveChanges().

This error may come as a surprise because you've actually set default values in the database for all non-nullable date-time fields. Unfortunately, the EF schema reader does not synchronize non-scalar default values, so the default value of getdate() set in the database is not automatically included in the model. Since the entity model doesn't know that the database has a default value, but it does know that the field cannot be null, it requires a value. If you don't provide a value, the mapper automatically assigns DateTime.MinValue. The database does not accept this value, so we have to set it ourselves, even though we've already set the desired default on the database.

To add insult to injury, the designer does not allow non-scalar values (e.g. you can't set DateTime.Now in the property editor), so you have to set non-scalar defaults in the constructors that you'll declare by hand in the partial classes for all EF objects with dates5.
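That pattern can be sketched as follows; the entity name Invoice and property TimeCreated are illustrative, not from our model, and in a real project the first half of the partial class is produced by the designer rather than written by hand:

```csharp
using System;

// Stand-in for the EF-generated half of the entity class.
public partial class Invoice
{
    public DateTime TimeCreated { get; set; }
}

// The hand-written half: assign a value SQL Server accepts, since the
// model ignores the getdate() default and would otherwise send
// DateTime.MinValue (year 1), which is below the datetime minimum of 1753.
public partial class Invoice
{
    public Invoice()
    {
        TimeCreated = DateTime.Now;
    }
}
```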

In order to figure out which date-time is causing a problem once you think you've set them all, your best bet is to debug the Microsoft sources so you can see where ADO.NET is throwing the SqlClientException. The SQL Profiler is unfortunately no use because the validation errors occur before the command is sent to the database. To keep things interesting, the Entity Framework sources are not available yet.

Using Transactions

The documentation recommends using TransactionScope transactions, which use the DTS (Distributed Transactions Services). If the database is running locally, you should have no trouble; at most, you'll have to start the DTC6 service. If the database is on a remote server, then you'll need to do the following:

  1. Enable remote network access for the MSDTC by opening the firewall for that application and opening access in that application itself; see Troubleshooting Problems with MSDTC for more information. The quickest solution is to simply use non-authenticated communication for development servers on a closed network.
  2. If that doesn't work, then you'll have to dig deeper into your firewall problems. How do you know when you're an enterprise developer? When you have to read through a manual like the following: How to troubleshoot MS DTC firewall issues.
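The basic pattern looks something like this sketch; the helper and its Action parameter are illustrative, and in real code the delegate would wrap calls like SaveChanges() on one or more object contexts:

```csharp
using System;
using System.Transactions;

static class TransactionScopeExample
{
    // Runs the given work inside an ambient transaction; any enlisted
    // connection commits only if Complete() is reached.
    public static void RunInTransaction(Action work)
    {
        using (var scope = new TransactionScope())
        {
            work();
            scope.Complete(); // omit this and Dispose() rolls the work back
        }
    }
}
```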

Any troubles you may experience with the DTC are unrelated to EF development; they're just the pain of working with highly-integrated and security-aware software. That's not to say that the experience is pleasant when something is mis-configured, but that I am reserving judgment until a later point in time.

Common Error Messages

The following section includes solutions for specific errors that crop up more often during EF model development.

Mapping Fragments...

Error 1 Error 3007: Problem in Mapping Fragments starting at lines 1383, 1617: Non-Primary-Key column(s) [ColumnName] are being mapped in both fragments to different conceptual side properties - data inconsistency is possible because the corresponding conceptual side properties can be independently modified.

You have most likely mapped the property identified by ColumnName as both a scalar and navigational property. This usually happens in the following situation:

  1. Add a foreign-key property on the database (do not create a constraint).
  2. Update the entity model from the database; a scalar property is added to your model.
  3. Add the foreign-key constraint to the database.
  4. Update the entity model from the database; a navigational property is added to your model, but the scalar property is not removed.

To fix the conflict, simply remove the scalar property manually.

Cardinality Constraints...

A relationship is being added or deleted from an AssociationSet FK_ItemChildren_Item. With cardinality constraints, a corresponding ItemChildren must also be added or deleted.

You have most likely created a cascading relationship in the database and the EF editor has failed to properly update the model. It seems that there is no way to determine from the designer whether or not an association has delete or update rules. According to the blog post, Cascade delete in Entity Framework, the designer sometimes fails to update the association in both the physical and entity mappings in the XML file, so you have to add the rule by hand. See the article for instructions.
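The rule in question is an OnDelete element nested inside the association's principal End; the association, type and role names below are illustrative:

```xml
<Association Name="FK_ItemChildren_Item">
  <End Role="Item" Type="Model.Item" Multiplicity="1">
    <OnDelete Action="Cascade" />
  </End>
  <End Role="ItemChildren" Type="Model.ItemChildren" Multiplicity="*" />
</Association>
```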


Conclusion

The database-design phase is more difficult than it should be, but it is navigable. You end up with a very usable, generated set of classes which nicely integrate with data-binding on controls. We will soldier on and bring news of our experiences on the runtime front.

  1. The blog post, Versioned Objects with Hibernate, illustrates one example why.

  2. To be addressed in a future post on Quino and the Entity Framework, when nascent plans for integration have become clearer.

  3. This is the most attractive of the options.

  4. When EF imports from the database, it uses the names from the database. You'll generally adjust these identifiers to "pluralize" them; when you have to delete swathes of code because EF can't synchronize, you'll throw away these customized identifiers and have to update them again.

  5. If your objects tend to have timeCreated/timeModified fields, you've got a lot of work ahead of you.

  6. That's not a typo, support for distributed transactions is provided by the DTS -- the Distributed Transaction Services. However, the actual Windows service is called the DTC -- the Distributed Transaction Controller.

ASP.Net DataBinding and Templates

As of ASP.Net 2.0, some server controls offer so-called templates. Templates let you define parts of a server control's content, which the control inserts as needed. In a DataList, for example, you define the HTML code for a single item or record. The server control then uses this template at runtime to render the individual rows of the list as HTML.

Using templates in practice is not always problem-free, let alone self-explanatory. In this article, I'd therefore like to pass on a few lessons from the practical use of these templates and how to work with them effectively.

Many of the following examples can also be used outside of templates, but they are especially helpful for working efficiently with templates.

Templates in the Designer

First, a note on where and how to find templates in the Visual Studio webforms designer, since I searched for them for quite a while myself.

Option 1: In the "Design" or "Split" view, select the server control and click its smart tag (the little arrow at the top right of the control you just selected). Then click "Edit Templates". Under "Display" you can now choose the template to edit, e.g. "HeaderTemplate", "ItemTemplate" and so on.

Option 2: This option is not available for all server controls. In the "Design" or "Split" view, select the server control and, at the bottom edge of the Visual Studio "Properties" panel, click the link for the template you want to edit.

While the server control is in template-editing mode, you can place controls in it with the designer and much more. To end editing mode, open the smart tag again and choose "End Template Editing".

The "Template" Sandbox

Inside a template, some things behave a bit differently than they do directly on the page (Page) or control (WebControl).

Inside a template you have no typed access to the "outside world", i.e. the controls outside the template. If, for example, you want to access another control from an event handler of a control that lives inside a template, you must, using the function

find a reference to that control and cast it. This is because the control instances inside a template are instantiated dynamically at runtime, one or more times, by the enclosing server control.
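Presumably the function meant here is Control.FindControl(); as a sketch (the control IDs match the markup examples below, and the event handler shown is illustrative):

```csharp
using System.Web.UI.WebControls;

public partial class MyPage : System.Web.UI.Page
{
    protected void _myList_ItemDataBound(object sender, DataListItemEventArgs e)
    {
        // Look up the Label declared inside the ItemTemplate and cast it;
        // typed access is not available because template controls are
        // instantiated dynamically at runtime.
        var label = e.Item.FindControl("_firstname") as Label;
        if (label != null)
        {
            label.Font.Bold = true;
        }
    }
}
```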

Es bleiben aber einige Sachen: z.B. das Databinding. Wir das ServerControl auf eine DataSource gebunden, bekommt Ihr z.B. im ItemTemplate den entsprechenden Datensatz für das Databinding geliefert. Ihr könnt aus dem Markup-Code des Templates auch auf Eigenschaften und Methoden des unterliegenden Page-Objektes zugreifen oder die ASP.Net Databinding-Mechanismen einsetzen. Beispiele hierzu folgen.

Displaying values via data binding (unidirectional)

Some of you may already know the method Eval(). This function returns the value of the field named by its parameter, typed as object.

<asp:DataList ID="_myList" runat="Server">
  <ItemTemplate>
    <asp:Label ID="_firstname" runat="Server" Text='<%# Eval("Firstname") %>'></asp:Label>
  </ItemTemplate>
</asp:DataList>

The example above uses the ASP.NET tag <%# ... %> to bind the Text property of the label to the result of the function Eval("Firstname"). Eval(), for its part, returns the content of the Firstname property of the underlying data item (e.g. a record from a database or an in-memory object). This result is an object and is converted to a string automatically by .NET via its ToString() method.

Note that in this case the tag is written with single quotes, while the function parameter must be written with double quotes.
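Incidentally, Eval() used this way is shorthand for a call to the static DataBinder.Eval() method against the container's current data item; the two labels below are equivalent:

```
<asp:Label ID="_a" runat="Server" Text='<%# Eval("Firstname") %>'></asp:Label>
<asp:Label ID="_b" runat="Server" Text='<%# DataBinder.Eval(Container.DataItem, "Firstname") %>'></asp:Label>
```

The longer DataBinder.Eval() form is useful where the shorthand is not available, e.g. outside of a templated data-binding context.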

The example above can be taken further. Suppose we want to print a field Birthday of type DateTime in short date notation; we could use the following markup:

<asp:DataList ID="_myList" runat="Server">
  <ItemTemplate>
    <asp:Label ... Text='<%# ((DateTime)Eval("Birthday")).ToShortDateString() %>'></asp:Label>
  </ItemTemplate>
</asp:DataList>

As you can see, you can get quite a lot done right in the markup of a web form. I advise caution here, though: very complex expressions quickly become hard to read, and maintaining the markup can become harder as well.

Binding values via data binding (bidirectional)

Analogous to the Eval() function described above, the method Bind() lets you bind a property to a field so that, on changes, the new value is assigned back to the field. This makes sense for TextBox controls, for example.

<asp:DataList ID="_myList" runat="Server">
  <ItemTemplate>
    <asp:TextBox ID="_firstname" runat="Server" Text='<%# Bind("Firstname") %>'></asp:TextBox>
  </ItemTemplate>
</asp:DataList>

Calling functions in the code-behind class

In ASP.NET 2.0, every web form normally has a so-called "code-behind" class in a separate .cs file. The ASP.NET runtime generates a partial class from the page's markup and compiles it together with the partial class from that .cs file into a single class. That, by the way, is why the generated class skeletons in the .cs file are marked partial.
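The partial-class merging the runtime relies on is easy to demonstrate outside of ASP.NET; the class and member names below are invented for illustration:

```csharp
using System;

// Two halves of the same class -- in a web form, one half would be
// generated from the markup and the other written by hand in the
// code-behind file. The compiler merges them into a single type.
public partial class Greeting
{
    public string Hello() { return "Hello, " + Name(); }
}

public partial class Greeting
{
    public string Name() { return "World"; }
}

public static class Program
{
    public static void Main()
    {
        Console.WriteLine(new Greeting().Hello()); // prints "Hello, World"
    }
}
```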

As described above, we can also embed a fair amount of logic in the form of C# code in the page's markup, though for readability and maintainability this should be done with care. So where should the C# code go?

The right place is the C# class in the code-behind file. From the markup you can call any property or method that is at least protected. We could therefore rework the birthday-formatting example above as follows:


<asp:DataList ID="_myList" runat="Server">
  <ItemTemplate>
    <asp:Label ... Text='<%# GetFormattedDate(Eval("Birthday")) %>'></asp:Label>
  </ItemTemplate>
</asp:DataList>

public partial class MyPage : System.Web.UI.Page
{
  protected String GetFormattedDate(object date)
  {
    if (date is DateTime)
    {
      return ((DateTime)date).ToShortDateString();
    }

    return String.Empty;
  }
}


The logic in this example is of course still very simple, which is why it also works fine in the markup. But as soon as the date formatting is used in several places, or the logic grows more complex -- computing color values from data values, say -- implementing it in the code-behind quickly becomes more than sensible.

Accessing the underlying DataItem

Instead of accessing individual fields of the underlying data item with Eval() or Bind(), we can also access the DataItem directly. Suppose our list had the following class Person as its data item:

public class Person
{
  public String Firstname { get; set; }
  public String Lastname { get; set; }

  public override string ToString()
  {
    return String.Format("{0} {1}", Firstname, Lastname);
  }
}

The class Person defines two properties (automatic properties, available since .NET 3.5) and overrides ToString() to output first and last name as the textual representation of a person. Now we want a Label in the DataList to print the full name as produced by our ToString(). So we cannot use Eval("Lastname"), and we don't want to program the formatting again in our code-behind class either, since it is already implemented centrally on the Person class.


<asp:DataList ID="_myList" runat="Server">
  <ItemTemplate>
    <asp:Label ... Text='<%# Container.DataItem %>'></asp:Label>
  </ItemTemplate>
</asp:DataList>

Container gets us to the server control, i.e. the container of our template instance, and through it we can access the current data item via DataItem. Since we assign it to a string property (Text), .NET 3.5 converts it to a string automatically, so we can drop the trailing .ToString(). Older .NET versions would require the explicit form

<%# Container.DataItem.ToString() %>


Of course, we could also pass the DataItem on to functions, or cast it and access its methods and properties directly.

<asp:DataList ID="_myList" runat="Server">
  <ItemTemplate>
    <asp:Label runat="Server" Text='<%# GetBirthday((Person)Container.DataItem) %>'></asp:Label>
  </ItemTemplate>
</asp:DataList>


// assumes Person also has a DateTime property named Birthday
protected String GetBirthday(Person person)
{
  if (person != null)
  {
    return person.Birthday.ToShortDateString();
  }

  return String.Empty;
}


At first I struggled a bit with templates and used FindControl() a lot inside them. I have since moved away from that, because too much ran "by name" and had to be cast.

Today I mainly use the mechanisms described above and fall back on FindControl() only when necessary. Templates are quite comfortable to work with as long as you play by their rules.

By the way, your own web controls can be taught template support too, and it isn't even that much work. But that's for another blog post ;-)

Encodo C# Handbook


The first publicly available version of the Encodo C# Handbook is ready for download! It covers many aspects of programming with C#, from naming, structural and formatting conventions to best practices for using existing and developing new code.

Here's the backstory on how and why we decided to write a formal coding handbook.

Here at Encodo, we started working with C# less than a year ago. We decided early on that we would be building a framework on which we would base our projects, both internal and external. That framework now exists and forms the core of several client projects: it's called "Quino" and you can find out more at the Quino home page. Since we were library-oriented from the get-go, we were very aware of our coding style and were interested to know how other projects and developers organized and formatted their code and how they worked with the .NET framework.

Naturally, there was a lot of documentation to be found in Microsoft's MSDN, but it was scattered over dozens of pages and wasn't very useful as a consolidated reference. It also made recommendations that Microsoft themselves ignored in their own code. Searching with Mr. Google brought up numerous references to a manual from iDesign, which is quite good. Philips also has a pretty extensive manual.

We started with those as well as a bushel of ad-hoc rules we'd developed over the years and an "Encodo Style" slowly evolved. Where we diverged from other companies is that we decided to write it all down. Every last niggling bit of it. The handbook was in a very ad-hoc format when we hired Marc and realized that we'd need to get him up to speed on how we work at Encodo. After an initial formatting effort, there followed a few months of slow accretion of new rules as well as a refinement of existing ones.

Where our guide differs from the others is in the organization; there are clear sections for structure, formatting, naming, language elements and best practices instead of just a hodge-podge of rules. We've also done our best to weed out conflicting or repeated rules. The current handbook (version 1.4) also includes rules for those of you, like us, who've moved on to VS2008 and the wonderful world of .NET 3.5.

Though there will certainly be updates as we learn more, we hope you like what we've got so far and welcome any and all feedback!

For your quick perusal, here's the current table of contents:

Table of Contents
**1	General**
1.1	Goals
1.2	Scope
1.3	Fixing Problems in the Handbook
1.4	Fixing Problems in Code
1.5	Working with an IDE

**2	Design Guide**
2.1	Abstractions
2.2	Inheritance vs. Helpers
2.3	Interfaces vs. Abstract Classes
2.4	Modifying interfaces
2.5	Delegates vs. Interfaces
2.6	Methods vs. Properties
2.7	Virtual Methods
2.8	Choosing Types
2.9	Design-by-Contract
2.10	Controlling API Size

**3	Structure**
3.1	File Contents
3.2	Assemblies
3.3	Namespaces
3.3.1	Usage
3.3.2	Naming
3.3.3	Standard Prefixes
3.3.4	Standard Suffixes
3.3.5	Encodo Namespaces
3.3.6	Grouping and ordering

**4	Formatting**
4.1	Indenting and Spacing
4.1.1	Case Statements
4.2	Brackets (Braces)
4.2.1	Properties
4.2.2	Methods
4.2.3	Enumerations
4.2.4	Return Statements
4.3	Parentheses
4.4	Empty Lines
4.5	Line Breaking
4.5.1	Method Calls
4.5.2	Method Definitions
4.5.3	Multi-Line Text
4.5.4	Chained Method Calls
4.5.5	Anonymous Delegates
4.5.6	Lambda Expressions
4.5.7	Ternary and Coalescing Operators

**5	Naming**
5.1	Basic Composition
5.1.1	Valid Characters
5.1.2	General Rules
5.1.3	Collision and Matching
5.2	Capitalization
5.3	The Art of Choosing a Name
5.3.1	General
5.3.2	Namespaces
5.3.3	Interfaces
5.3.4	Classes
5.3.5	Properties
5.3.6	Methods
5.3.7	Parameters
5.3.8	Local Variables
5.3.9	Events
5.3.10	Enumerations
5.3.11	Generic Parameters
5.3.12	Lambda Expressions
5.4	Common Names
5.4.1	Local Variables and Parameters
5.4.2	User Interface Components
5.4.3	ASP Pages

**6	Language Elements**
6.1	Declaration Order
6.2	Visibility
6.3	Constants
6.3.1	readonly vs. const
6.3.2	Strings and Resources
6.4	Properties
6.4.1	Indexers
6.5	Methods
6.5.1	Virtual
6.5.2	Overloads
6.5.3	Parameters
6.5.4	Constructors
6.6	Classes
6.6.1	Abstract Classes
6.6.2	Static Classes
6.6.3	Sealed Classes & Methods
6.7	Interfaces
6.8	Structs
6.9	Enumerations
6.9.1	Bit-sets
6.10	Nested Types
6.11	Local Variables
6.12	Event Handlers
6.13	Operators
6.14	Loops & Conditions
6.14.1	Loops
6.14.2	If Statements
6.14.3	Switch Statements
6.14.4	Ternary and Coalescing Operators
6.15.1	Formatting & Placement
6.15.2	Styles
6.15.3	Content
6.16	Grouping with #region Tags
6.17	Compiler Variables
6.17.1	The [Conditional] Attribute
6.17.2	#if/#else/#endif

**7	Patterns & Best Practices**
7.1	Safe Programming
7.2	Side Effects
7.3	Null Handling
7.4	Casting
7.5	Conversions
7.6	Object Lifetime
7.7	Using Dispose and Finalize
7.8	Using base and this
7.9	Using Value Types
7.10	Using Strings
7.11	Using Checked
7.12	Using Floating Point and Integral Types
7.13	Using Generics
7.14	Using Event Handlers
7.15	Using var
7.15.1	Examples
7.16	Using out and ref parameters
7.17	Error Handling
7.17.1	Strategies
7.17.2	Error Messages
7.17.3	The Try* Pattern
7.18	Exceptions
7.18.1	Defining Exceptions
7.18.2	Throwing Exceptions
7.18.3	Catching Exceptions
7.18.4	Wrapping Exceptions
7.18.5	Suppressing Exceptions
7.18.6	Specific Exception Types
7.19	Generated code
7.20	Setting Timeouts
7.21	Configuration & File System
7.22	Logging and Tracing
7.23	Performance

**8	Processes**
8.1	Documentation
8.1.1	Content
8.1.2	What to Document
8.2	Testing
8.3	Releases

Generics and Delegates in C#

The term DRY -- Don't Repeat Yourself -- has become more and more popular lately as a design principle. This is nothing new and is the main principle underlying object-oriented programming. As OO programmers, we've gotten used to using inheritance and polymorphism to encapsulate concepts. Until recently, languages like C# and Java have had only very limited support for re-using functionality across larger swathes of code.1 To illustrate this, let's take a look at a simple class with a descendent as well as some code that deals with lists of these objects and their properties.

Let's start with some basic definitions2:

class Pet
{
  public string Name
  {
    get { return _Name; }
  }

  public bool IsHouseTrained
  {
    get { return _IsHouseTrained; }
  }

  private string _Name;
  private bool _IsHouseTrained = true;
}

class Dog : Pet
{
  public void Bark() {}
}

class Owner
{
  public IList<Pet> Pets
  {
    get { return _Pets; }
  }

  private IList<Pet> _Pets = new List<Pet>();
}

This is basically boilerplate for articles about inheritance, so let's move on to working with these classes. Imagine that the Owner wants to find all pets named "Fido":

IList<Pet> FindPetsNamedFido()
{
  IList<Pet> result = new List<Pet>();
  foreach (Pet p in Pets)
  {
    if (p.Name == "Fido")
    {
      result.Add(p);
    }
  }
  return result;
}

Again, no surprises yet. This is a standard loop in C#, using the foreach construct and generics to loop through the list in a type-safe manner. Applying the DRY principle, however, we see that we're going to end up writing a lot of these loops -- especially if we offer a lot of different ways of analyzing data in the list of pets. Essentially, the code above is a completely standard loop except for the condition -- the (p.Name == "Fido") part. We can then imagine a function with the following form:

IList<Pet> FindPets(??? condition)
{
  IList<Pet> result = new List<Pet>();
  foreach (Pet p in Pets)
  {
    if (condition(p))
    {
      result.Add(p);
    }
  }
  return result;
}

Introducing Delegates

Now we need to figure out what type condition has. From the function body, we see that it takes a parameter of type Pet and returns a bool value. In C#, the definition of a function is called a delegate, which is also a keyword; for the type above, we write:

delegate bool MatchesCondition(Pet item);

As mentioned above, the return type is a bool, the single parameter is of type Pet, and the delegate is identified by the name MatchesCondition. The name of the parameter is purely for documentation. We can then rewrite the function signature above using the delegate we just defined:

IList<Pet> FindPets(MatchesCondition condition) {...}

We've managed to move the looping code for many common situations into a shared method. Now, how do we use it? We originally wanted to find all pets named "Fido", so we need to define a function that does just that, matching the function signature defined by MatchesCondition:

bool IsNamedFido(Pet p)
{
  return p.Name == "Fido";
}

In this fashion, we can write any number of methods, which check various conditions on Pets. To use this method, we simply pass it to the shared FindPets method, like this:

IList<Pet> petsNamedFido = FindPets(IsNamedFido);
IList<Pet> petsNamedRex = FindPets(IsNamedRex);
IList<Pet> houseTrainedPets = FindPets(IsHouseTrained);

Anonymous Methods

This is better than the previous situation -- in which we would have repeated the loop again and again -- but we can do better. The problem with this solution is that it tends to clutter the class (Owner in this case) with many little methods that are useful only in conjunction with FindPets. Even if the methods are private, it's a shame to have to use a full-fledged method as a kludge for instancing a piece of code to be called. The C# designers thought so too, so they added anonymous methods, which have a parameter list and a body, but no name. Using anonymous methods, we can replace the methods, IsNamedFido, IsNamedRex and IsHouseTrained, with the following code:

IList<Pet> petsNamedFido = FindPets(delegate(Pet p) { return p.Name == "Fido"; });
IList<Pet> petsNamedRex = FindPets(delegate(Pet p) { return p.Name == "Rex"; });
IList<Pet> houseTrainedPets = FindPets(delegate(Pet p) { return p.IsHouseTrained; });

Again, the keyword delegate introduces a parameter list and body for the anonymous method.

Generic Functions

All of the code above uses the generic IList and List classes. None of the looping code in FindPets is dependent on the type of the list element except for the condition. It would be really nice if we could re-use this code not just for Pets, but for any collection of elements. Generic functions to the rescue. A generic function has one or more generic parameters, which can be used throughout the parameter list and implementation body. The first step in making FindPets fully generic is to change the definition of MatchesCondition:

delegate bool MatchesCondition<T>(T item);

As with a generic class, the function's generic arguments appear within pointy brackets after the identifier -- in this case, the single generic parameter is named T. Pet has been replaced as the type of the parameter as well. In order to finish making FindPets fully generic, we'll have to pass it a list to work with (right now it always uses Pets) and change the name, so as to avoid confusion:

IList<T> FindItems<T>(IList<T> list, MatchesCondition<T> condition)
{
  IList<T> result = new List<T>();
  foreach (T item in list)
  {
    if (condition(item))
    {
      result.Add(item);
    }
  }
  return result;
}

We're not quite done yet, though. If you look closely at the function body, all it does is enumerate the items in the parameter list. Therefore, we can loosen the type-constraint of the parameter from IList to IEnumerable, so that it can be called with any collection from all of .NET.

IList<T> FindItems<T>(IEnumerable<T> list, MatchesCondition<T> condition) {...}

And ... we're done. Fully generic! Let's see how that looks using the examples from above:

IList<Pet> petsNamedFido = FindItems<Pet>(Pets, delegate(Pet p) { return p.Name == "Fido"; });
IList<Pet> petsNamedRex = FindItems<Pet>(Pets, delegate(Pet p) { return p.Name == "Rex"; });
IList<Pet> houseTrainedPets = FindItems<Pet>(Pets, delegate(Pet p) { return p.IsHouseTrained; });

Though we've lost something in legibility, we've gained quite a bit in re-use. Imagine now that an Owner also has a list of Vehicles, a list of Properties and a list of Relatives. You only have to write the conditions themselves and you can search any type of container for items matching any condition ... all in a statically type-safe manner:

IList<Pet> petsNamedFido = FindItems<Pet>(Pets, delegate(Pet p) { return p.Name == "Fido"; });
IList<Vehicle> redCars = FindItems<Vehicle>(Vehicles, delegate(Vehicle v) { return (v is Car) && (((Car)v).Color == Red); });
IList<Property> bigLand = FindItems<Property>(Properties, delegate(Property p) { return p.Acreage >= 1000; });
IList<Relative> deadBeats = FindItems<Relative>(Relatives, delegate(Relative r) { return r.MoneyOwed > 0; });

Note: C# 2.0 offers this functionality in the .NET library for both the List and Array classes. In the official version, MatchesCondition is called Predicate and FindItems is called FindAll. It's not clear why these methods aren't offered for all collections, as illustrated in our example.
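To make the parallel concrete, here is the Fido search rewritten against the built-in List<T>.FindAll, where Predicate<T> plays the role of our MatchesCondition<T> (the Pet class here is a trimmed-down stand-in):

```csharp
using System;
using System.Collections.Generic;

class Pet
{
    private string _Name;
    public Pet(string name) { _Name = name; }
    public string Name { get { return _Name; } }
}

static class Program
{
    static void Main()
    {
        List<Pet> pets = new List<Pet>();
        pets.Add(new Pet("Fido"));
        pets.Add(new Pet("Rex"));

        // FindAll plays the role of FindItems; the anonymous method
        // is the Predicate<Pet> (our MatchesCondition<Pet>).
        List<Pet> found = pets.FindAll(delegate(Pet p) { return p.Name == "Fido"; });

        Console.WriteLine(found.Count); // prints 1
    }
}
```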

Extension Methods

Can we do something about the legibility of the solution from the last section? In C# 2.0, we've reached the end of the line. If you've been following the development of "Orcas" and C# 3.0/3.5, you might have heard of extension methods3, which allow you to extend existing classes with new functions without inheriting from them. Let's extend any IEnumerable with our find function:

public static class MyVeryOwnExtensions
{
  public static IList<T> FindItems<T>(this IEnumerable<T> list, MatchesCondition<T> condition)
  {
    // implementation from above
  }
}

The keyword this highlighted above indicates to the compiler that FindItems is an extension method for the type following it: IEnumerable<T>. Now we can call FindItems with a bit more legibility and clarity, dropping both the generic parameter and the actual argument (Pet and Pets, respectively) and replacing them with a method call on Pets directly.

IList<Pet> petsNamedFido = Pets.FindItems(delegate(Pet p) { return p.Name == "Fido"; });


For brevity's sake, the examples in this section assume use of the extension method defined above. To use the examples with C# 2.0, simply rewrite them to use the non-extended syntax.

We use anonymous methods to avoid declaring methods that will be used for one-off calculations. However, larger methods or methods that are reused throughout a class properly belong to the class as full-fledged methods. At the top, we defined a descendant of the Pet class called Dog. Imagine that each Owner has not only a list of Pets, but also a list of Dogs. Then we'd like to bring back our IsNamedFido method in order to be able to apply it against both lists (copied from above):

bool IsNamedFido(Pet p)
{
  return p.Name == "Fido";
}

Now we can use this method to test against lists of pets or lists of dogs:

IList<Pet> petsNamedFido = Pets.FindItems(IsNamedFido);
IList<Dog> dogsNamedFido = Dogs.FindItems(IsNamedFido);

The example above illustrates an interesting property of delegates, called contravariance. Because of this property, we can use IsNamedFido -- which takes a parameter of type Pet -- when calling FindItems<Dog>. That means that IsNamedFido can be used with any list containing objects descended from Pet. Unfortunately, contravariance only applies in this very special case; the type of dogsNamedFido cannot be IList<Pet> because IList<Dog> does not conform to IList<Pet>.4
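That method-group contravariance can be shown self-contained with the built-in Predicate<T> and List<T>.FindAll (the class names here are stand-ins for the ones above):

```csharp
using System;
using System.Collections.Generic;

class Pet
{
    private string _Name;
    public Pet(string name) { _Name = name; }
    public string Name { get { return _Name; } }
}

class Dog : Pet
{
    public Dog(string name) : base(name) { }
}

static class Program
{
    // Declared against the base type Pet...
    static bool IsNamedFido(Pet p) { return p.Name == "Fido"; }

    static void Main()
    {
        List<Dog> dogs = new List<Dog>();
        dogs.Add(new Dog("Fido"));
        dogs.Add(new Dog("Rex"));

        // ...yet accepted where a Predicate<Dog> is expected, because
        // any method that handles a Pet can handle a Dog (contravariance).
        List<Dog> found = dogs.FindAll(IsNamedFido);

        Console.WriteLine(found.Count); // prints 1
    }
}
```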

However, this courtesy extends only to predefined delegates. If we wanted to replace the call to IsNamedFido with a call to an anonymous method, we'd be forced to specify the exact type for the parameter, as shown below:

IList<Dog> dogsNamedFido = Dogs.FindItems(delegate(Dog d) { return d.Name == "Fido"; });

Using Pet as the type parameter does not compile even though it is simply an in-place reformulation of the previous example. Enforcing the constraint here does not restrict the expressiveness of the language in any way, but it's interesting to note that the compiler relaxes the rule against contravariance only when it absolutely has to.


In the previous section, we created a method, IsNamedFido instead of using an anonymous method to avoid duplicate code. In that spirit, suppose we further believe that having a name-checking function that checks a constant is also not generalized enough5. Suppose we write the following function instead:

bool IsNamed(Pet p, string name)
{
  return p.Name == name;
}

Unfortunately, there is no way to call this method directly because it takes two parameters and doesn't match the signature of MatchesCondition (and even contravariance won't save us). You can, however, drop back to using a combination of the defined method and an anonymous method:

IList<Pet> petsNamedFido = Pets.FindItems(delegate (Pet p) { return IsNamed(p, "Fido"); });

This version is a good deal less legible, but it shows how you can pack most of the functionality away into an anonymous method, repeating as little as possible. Even if the anonymous method uses local or instance variables, those are captured along with the delegate -- a closure -- so they are still available when the delegate is eventually invoked.
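One detail worth noting: C# anonymous methods capture the variable itself, not a snapshot of its value at creation time, which a short sketch makes visible:

```csharp
using System;
using System.Collections.Generic;

static class Program
{
    static void Main()
    {
        string wanted = "Fido";

        List<string> names = new List<string>();
        names.Add("Fido");
        names.Add("Rex");

        // The anonymous method captures the local variable 'wanted'.
        Predicate<string> match = delegate(string n) { return n == wanted; };

        Console.WriteLine(names.FindAll(match).Count); // prints 1 (matches "Fido")

        // Because the variable -- not its value -- was captured, changing
        // it after the delegate was created changes what gets matched.
        wanted = "Rex";
        Console.WriteLine(names.FindAll(match).Count); // prints 1 (now matches "Rex")
    }
}
```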

For comparison, Java does not support proper closures, requiring final hacks and creation of anonymous classes in order to perform the task outlined above. Various proposals aim to extend Java in this direction, but, as of version 6, none have yet found their way into the language specification.


On a final note, it would be nice to have a cleaner notation for formulating the method call above -- in which additional parameters to a function must be collected manually into an anonymous method. The Eiffel programming language offers such an alternative, calling its delegates agents instead6. The conformance rules for agents against a delegate type like MatchesCondition<T> are different: the signature need not match perfectly; instead, all non-conforming parameters must be provided at the time the agent is created.

Eiffel uses question marks to indicate where actual arguments are to be mapped to the agent, so in pseudo-C# syntax, the method call above would be written as:

IList<Pet> petsNamedFido = Pets.FindItems(agent IsNamed(?, "Fido"));

This is much more concise and expressive than the C# version. It differs enough from an actual function call -- through the rather obvious and syntax-highlightable keyword, agent -- but not so much as to suggest an entirely different mechanism. The developer is made aware that it's not a regular method call, but a delayed one. C# could easily implement such a feature as pure syntactic sugar, compiling the agent expression to the previous formulation automatically. Perhaps in C# 4.0?

All in all, though, C#'s support for generics, closures and DRY programming is eminently useful, and it looks only to improve with upcoming features like LINQ and type inference, which will improve legibility and expressiveness dramatically.

  1. This article covers ways of statically checking code validity, so dynamically typed languages, like Smalltalk, Ruby or Python, while providing the same functionality, don't apply because they can't verify correctness at compile-time. On the other hand, there are languages -- like Eiffel, which has had generics from the very beginning, but never really caught on (though it now runs under .NET) or C++, which has the powerful STL, but is horrifically complex for general use -- which have offered some or all of the features discussed in this article for quite some time now.

  2. The notation is C# 2.0, which does not yet support automatic properties.

  3. As described in New "Orcas" Language Feature: Extension Methods by Scott Guthrie

  4. This reduces the expressiveness of the language, but C# forbids this because it cannot statically prevent incorrect objects from being added to the resulting list. Building on the example above, if we assume a class Cat also descended from Pet, it would then be possible to do the following:

     IList<Pet> dogsNamedFido = Dogs.FindItems(IsNamedFido);
     dogsNamedFido.Add(new Cat());

     This would cause a run-time error because the actual instance attached to dogsNamedFido can only contain Dogs. Instead of adding run-time checking for this special case and enhancing the expressiveness of the language -- as Eiffel or Scala, for example, do -- C# forbids it entirely, as does Java.

  5. For the irony-impaired: yes, that was sarcasm.

  6. For more information on the Eiffel feature, see Agents in the online manual. For further information, the articles, Generic type parameter variance in the CLR and Using ConvertAll to Imitate Native Covariance/Contravariance in C# Generics, are also useful. For more information on closures in C#, see C#: Anonymous methods are not closures and The Power of Closures in C#.