This post is about an application design that uses sandboxing, restrictive security permissions, and strong-named assemblies to achieve a componentized application structure that can be security audited. This is useful if you provide infrastructure in which other people's code runs, or if you need to provide proof of your application's security.
Even for standard LoB architectures I think it makes sense to consider how an application is partitioned. Traditionally most people create different layers for their application, but layers are normally defined by their functional responsibilities. This is to some extent true of the onion architecture as well, although that architecture forces us to think more about dependencies than about deployment. For both approaches I think it is fair to say that people focus on the responsibilities and dependencies of a particular layer or ring, and not so much on which restrictions it should have. If you look at the inner core of the onion architecture, it should have very few dependencies, since it is surrounded by services that do all the "practical" work. Ask yourself: how much does my domain logic need to interact with the file system or the database? And how often do you police this? By restricting permissions in your application you can police these dependencies.
I recently posted a small spike project to look into possibilities for sandboxing parts of your application. It turns out that .NET has had excellent support for sandboxing from the very beginning, and those of us who have been doing .NET for a while certainly remember the troubles with partial trust scenarios. Unfortunately the trend seems to have been to jump to a more permissive mode rather than working with the strengths of Code Access Security (CAS). The .NET Framework is built on the idea that you can pull code from different sources and run it with restricted permissions, so it has very fine-grained support for specifying runtime permissions by giving an AppDomain a specific PermissionSet. However, the "net" part of .NET seems to have moved to My Computer.
You can find lots of posts about how to sandbox some code, but this post is about building up your application out of different sandboxes. The benefit of this application design is that you can separate the different components and assign only the required security permissions. The posts about sandboxing code generally rely on the predefined security profiles of the .NET Framework, but if you examine these you will see that code run from My Computer is given Full Trust. So while the sandbox may make it easy to isolate some code and unload assemblies, it doesn't do much to restrict the security permissions of that code. This is a problem if you are hosting plug-ins, because you have no way of knowing what a plug-in will do to the user's computer, and so you can only leave it up to users to protect themselves. With the new scripting possibilities offered by Roslyn, it will be easy to let the user write some code and execute it in your application (hopefully this means goodbye to VBA). But unless you do some extensive parsing, you are again opening up your application to abuse. It is also a problem if people start adding hidden dependencies into a component, for example interacting with the file system, and thereby bypassing your layering (or "onioning").
Building on the idea of sandboxes with restricted security permissions, I put together the aforementioned spike project, where the application is built up from a series of sandboxed components. Each component is given a predefined set of permissions which can be audited in advance by whoever is responsible for security. One of the problems with getting too restrictive about security is that you end up either granting too many permissions or having too many different sandboxes, each doing its own little thing. The way around this is to put essential security-sensitive code in strong-named assemblies which can be thoroughly reviewed and then fully trusted. These assemblies can then be referenced by restricted code to provide certain resources. The benefit of strong naming is that you can tell the AppDomain to fully trust one or more particular strong-named assemblies when creating it. This makes it possible for parts of the code to do things that require more trust while still keeping the AppDomain locked down, which is useful if you want to make sure that your layer (or ring slice) doesn't inadvertently do something outside its responsibility.
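A minimal sketch of this pattern on the .NET Framework might look as follows. `TrustedFileStore` and `componentPath` are illustrative names, not part of the spike project; the grant set here allows execution only, and the strong name of the reviewed helper assembly is passed to `AppDomain.CreateDomain` as a full-trust assembly:

```csharp
using System;
using System.Security;
using System.Security.Permissions;
using System.Security.Policy;

static AppDomain CreateSandbox(string componentPath)
{
    // Grant set for the sandbox: permission to execute, nothing else.
    var grantSet = new PermissionSet(PermissionState.None);
    grantSet.AddPermission(
        new SecurityPermission(SecurityPermissionFlag.Execution));

    // StrongName evidence of the reviewed helper assembly
    // (TrustedFileStore stands in for your trusted type).
    StrongName trusted = typeof(TrustedFileStore).Assembly
        .Evidence.GetHostEvidence<StrongName>();

    var setup = new AppDomainSetup { ApplicationBase = componentPath };

    // Code loaded into this domain runs with grantSet; the assemblies
    // passed in the trailing params list are granted full trust.
    return AppDomain.CreateDomain(
        "ComponentSandbox", null, setup, grantSet, trusted);
}
```

Anything the restricted component needs beyond execution then goes through the trusted assembly rather than through extra permissions in the grant set.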
So how is this done? The core of the application is the ComponentManager, which loads up the components defined in the application manifest. Each component is loaded into a separate AppDomain which is given the defined permissions. As mentioned, the support for this is built into the framework, and the security permissions can be defined in a simple XML structure.
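For illustration, a permission set along these lines could grant a component execution plus read-only access to a single folder (the assembly-qualified class names are abbreviated here, and the folder path is made up):

```xml
<PermissionSet class="System.Security.PermissionSet" version="1">
  <IPermission class="System.Security.Permissions.SecurityPermission, mscorlib"
               version="1" Flags="Execution" />
  <IPermission class="System.Security.Permissions.FileIOPermission, mscorlib"
               version="1" Read="C:\MyApp\Data" />
</PermissionSet>
```

Such a fragment can be turned into a `PermissionSet` at load time with `PermissionSet.FromXml(SecurityElement.FromString(xml))` and then handed to the AppDomain when it is created.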
For each AppDomain it is also possible to specify a number of strong-named assemblies which can be used to perform tasks requiring special privileges. Using this approach the security of the application can quite easily be audited, because the review can focus on the strong-named assemblies and the granted security permissions. I used a text-based approach to defining the strong names, so the ComponentManager doesn't have to load the actual assembly to get the information. This also means that when the AppDomain is unloaded, the strong-named assembly is unloaded as well, instead of staying around because the ComponentManager had loaded it.
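One way to build a `StrongName` from text alone is a small helper along these lines (the helper name is made up; it takes the hex public-key blob, for example as printed by `sn -Tp`, plus the assembly name and version), so no `Assembly.Load` ever happens in the ComponentManager's own domain:

```csharp
using System;
using System.Globalization;
using System.Linq;
using System.Security.Permissions;
using System.Security.Policy;

static StrongName StrongNameFromText(
    string publicKeyHex, string assemblyName, string version)
{
    // Convert the hex public-key blob into raw bytes.
    byte[] keyBytes = Enumerable.Range(0, publicKeyHex.Length / 2)
        .Select(i => byte.Parse(publicKeyHex.Substring(i * 2, 2),
                                NumberStyles.HexNumber))
        .ToArray();

    // Build the StrongName evidence from the manifest text only;
    // the assembly itself is never loaded here.
    return new StrongName(new StrongNamePublicKeyBlob(keyBytes),
                          assemblyName, new Version(version));
}
```

The resulting `StrongName` instances can then be passed straight to `AppDomain.CreateDomain` as the full-trust list.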
Obviously this approach puts security over performance. You will have to think about how you partition your application into components to achieve the balance that is right for you. Specifying permissions for each component can also be a tedious activity and may not be required everywhere. Some components may need to do multiple things if that is what it takes to achieve your performance goals, but at least you have made a conscious choice to make it so (hopefully).