Most of the time, when programming against the Microsoft CRM Web service, we get reasonable error messages. However, on occasion we get the inscrutable “Generic SQL Error”. I used to solve these by trial and error or intuition, but it turns out that there is a better way.

Using SQL Server Profiler can give us the answers we are looking for. Check out this blog post for the details. Cheers.


I’m continuing my series of posts on configuration patterns in .NET. One of the things I had to do recently was to retrofit some code that I was writing to use an alternate configuration file. That is, I needed the ability to specify a file on the command line from which to grab a config section, rather than having the .NET framework use the default app.config file that resides in the application directory.

It turns out that this is easy to do. There is a method on the ConfigurationManager called OpenMappedMachineConfiguration that will let you define another config file to use.

I have a little method for pulling in the configuration from a given file:

// MSCrmConfigurationSection is my custom config section type
MSCrmConfigurationSection GetConfig( string in_filename ) {
	ConfigurationFileMap fileMap = new ConfigurationFileMap( in_filename );
	Configuration configuration = System.Configuration.ConfigurationManager.OpenMappedMachineConfiguration( fileMap );
	MSCrmConfigurationSection config = ( MSCrmConfigurationSection )configuration.GetSection( "MSCrm" );
	return config;
}

A really nice feature that we get for free is that we can mix and match files at will, pulling configuration sections from different files. And, as a bonus, the app.config file still takes effect, so any assembly redirects or publisher policies that are defined there still work as usual. Pretty awesome.
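To make the mix-and-match concrete, here is a sketch of pulling two different sections from two different files (the file paths and the second section name, “Logging”, are invented for illustration):

```csharp
// Pull the CRM section from one file using the GetConfig helper above...
MSCrmConfigurationSection crmConfig = GetConfig( @"C:\configs\crm.config" );

// ...and a (hypothetical) logging section from a second file.
ConfigurationFileMap logMap = new ConfigurationFileMap( @"C:\configs\logging.config" );
Configuration logConfiguration =
    System.Configuration.ConfigurationManager.OpenMappedMachineConfiguration( logMap );
ConfigurationSection logConfig = logConfiguration.GetSection( "Logging" );
```

Each call maps its own file, so nothing stops us from resolving every section from a different place.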

Here is a quick post to outline the different ways that .NET custom config files can be written. The MSDN documentation generally does a poor job here: it walks through all of the types involved without ever showing what the resulting XML markup actually looks like.

Briefly, by default, custom configuration sections use XML attributes, unlike the appSettings section, which uses a special-cased key/value dictionary. I find this confusing, and it is not well treated in MSDN.

If we want a custom appSettings-like config like the following:

      <add key="NewKey0" value="Monday, March 30, 2009 1:36:33 PM" />
      <add key="NewKey1" value="Monday, March 30, 2009 1:36:40 PM" />

We need to use a DictionarySectionHandler instead of the usual ConfigurationSection.
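To wire that up, the section is declared with DictionarySectionHandler in configSections (the section name “mySettings” is made up for this sketch); ConfigurationManager.GetSection then returns the pairs as a Hashtable:

```xml
<configuration>
  <configSections>
    <section name="mySettings"
             type="System.Configuration.DictionarySectionHandler" />
  </configSections>
  <mySettings>
    <add key="NewKey0" value="Monday, March 30, 2009 1:36:33 PM" />
    <add key="NewKey1" value="Monday, March 30, 2009 1:36:40 PM" />
  </mySettings>
</configuration>
```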

Using a ConfigurationSection, our XML markup would instead put the values into attributes on the section element itself (the element name here is arbitrary):

     <mySection
          NewKey0="Monday, March 30, 2009 1:36:33 PM"
          NewKey1="Monday, March 30, 2009 1:36:40 PM" />

It is possible to create elements as children with some additional coding using ConfigurationElement.
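As a minimal sketch of that additional coding (the type, section, and property names are all invented), a ConfigurationSection with attribute-backed properties and one child ConfigurationElement might look like:

```csharp
using System.Configuration;

// Invented section mapping markup like:
//   <mySection NewKey0="Monday, March 30, 2009 1:36:33 PM">
//     <server url="http://crm.example.com" />
//   </mySection>
public class MySection : ConfigurationSection {
    [ConfigurationProperty( "NewKey0" )]
    public string NewKey0 {
        get { return ( string )this[ "NewKey0" ]; }
        set { this[ "NewKey0" ] = value; }
    }

    // Child elements require their own ConfigurationElement type.
    [ConfigurationProperty( "server" )]
    public ServerElement Server {
        get { return ( ServerElement )this[ "server" ]; }
    }
}

public class ServerElement : ConfigurationElement {
    [ConfigurationProperty( "url" )]
    public string Url {
        get { return ( string )this[ "url" ]; }
    }
}
```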

There is a long CodeProject article on this stuff here that I found useful, and there is an extensive treatment of configuration here, but I’ve abridged things greatly for my own reference. Hopefully this helps you too.

I have built a few tools to make life easier for myself. One of the tools that has proven itself invaluable is a metadata generator that I use to create a list of entities in a newly-created CRM bizorg.

Although the API supports most things that we’d want to do, there are a few tricky things that aren’t that obvious from the documentation. I’m going to go into this in more detail in a future post, but there are a few key takeaways that I want to make sure I remember, so I’m posting them here now.

1) Creating an entity doesn’t create any attributes other than the primary attribute.
Even if we give CRM a fully-populated entity metadata graph, only the primary attribute will be created on the CreateEntityRequest call.

2) An entity must have a primary attribute specified.
Creating the entity will fail unless a string attribute is provided and the entity’s primary attribute field is set to that attribute. Both of these conditions must be met.

3) Lookup attributes cannot be created via the normal attribute metadata service call.
Entity references are created via a completely separate call.

4) Creating lookup attributes will fail if the referenced entity does not exist.

5) Entity names must adhere to the CRM naming prefix convention.
Entity names and field names must have a prefix and an underscore in order to be considered valid schema names.

6) Cryptic “Generic SQL” errors are often caused by missing labels.
I’m still compiling a checklist of things that cause this frustrating error. Many times a call will fail with a SQL error, and it is very difficult to figure out what went wrong since there is no detail given.

7) Adding entities and fields doesn’t require that we publish, but adding new picklist options does require a separate publish action before the items can be used.

8) Boolean fields must be specially treated and given labels for the true and false textual values.
If this is not done, the field will silently fail to be created.
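For reference, here is roughly what those true/false labels look like against the CRM 2011 SDK (the entity and schema names are invented, and CRM 4 uses a different metadata API, so treat this as a sketch rather than a recipe):

```csharp
// Sketch against the CRM 2011 SDK (Microsoft.Xrm.Sdk.Metadata):
// a boolean attribute must carry labels for both options.
var boolAttribute = new BooleanAttributeMetadata {
    SchemaName = "new_isactive",                      // invented name
    DisplayName = new Label( "Is Active", 1033 ),
    RequiredLevel = new AttributeRequiredLevelManagedProperty(
        AttributeRequiredLevel.None ),
    OptionSet = new BooleanOptionSetMetadata(
        new OptionMetadata( new Label( "Yes", 1033 ), 1 ),   // true label
        new OptionMetadata( new Label( "No", 1033 ), 0 ) )   // false label
};

var request = new CreateAttributeRequest {
    EntityName = "new_widget",                        // invented entity
    Attribute = boolAttribute
};
service.Execute( request );  // service is an IOrganizationService
```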

That’s a brief list for now. Expanding on all of these things will take a whole series of posts I’m sure. This stuff is way under-documented and is hard-won knowledge, so I’ll be sure to give it a proper treatment here on the blog.

I have been meaning to write a post on configuration patterns in the .NET framework. Just about any app we write has some configuration data associated with it, and it sometimes gets ignored until the app is in production or at the very least until very late in the development cycle.

Microsoft provides a lot of options out of the box with the .NET framework, and even more through the Configuration Management Application Block. Most configuration concerns are the same whether it is a Windows app or a Web app, but since there are a lot of different options, how do we pick what is best for our application?

Before we get into any .NET-specific items, here are some general guidelines that I have used in the past:

Don’t rely on implicit configuration – allow the app to be explicitly configured programmatically

In the case of a library, we’ll want to be able to override configuration as needed for testing and special use cases. For example, I have a library for accessing CRM that has a configuration section. In some cases I want to test different configurations against different CRM servers in my tests, so I need a way to override the configuration file. Yes, you can do this in a convoluted way using the .NET framework, but it is much better to bake this into the library design. Another scenario is doing data export/import between multiple servers. I have a few tools that use the library I mentioned to take data from one CRM installation and move it to another one. If the library only supported a single configuration section, we’d have to build this multi-server functionality into the library, which is a concern that shouldn’t be part of the underlying library, but of the application.

However, a very nice feature of a library is to have a default configuration, as well as a default configuration location. This means that if we provide a config specifically it will override everything else, but if not, we can make it easy on ourselves by providing some sensible defaults or by using the app.config file if it exists. This kind of fallthrough can be a little tricky, so it is worth thinking about up-front so that things go smoothly and predictably as development progresses. Configuration behavior is one of those things that is tough to change once it gets into production.
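Here is a sketch of the fallthrough I have in mind (the section type, the "MSCrm" section name, and the Default property are invented for illustration):

```csharp
// Illustrative resolution order: explicit object, then a named file,
// then the default app.config section, then hard-coded defaults.
public static MSCrmConfigurationSection Resolve(
    MSCrmConfigurationSection explicitConfig, string configFile ) {

    // 1. A caller-supplied configuration object wins outright.
    if ( explicitConfig != null )
        return explicitConfig;

    // 2. Next, a specifically named file, mapped via
    //    ConfigurationManager.OpenMappedMachineConfiguration.
    if ( !string.IsNullOrEmpty( configFile ) ) {
        ConfigurationFileMap fileMap = new ConfigurationFileMap( configFile );
        Configuration mapped =
            ConfigurationManager.OpenMappedMachineConfiguration( fileMap );
        return ( MSCrmConfigurationSection )mapped.GetSection( "MSCrm" );
    }

    // 3. Fall through to app.config if it defines the section.
    MSCrmConfigurationSection fromAppConfig =
        ( MSCrmConfigurationSection )ConfigurationManager.GetSection( "MSCrm" );
    if ( fromAppConfig != null )
        return fromAppConfig;

    // 4. Finally, baked-in defaults (hypothetical factory property).
    return MSCrmConfigurationSection.Default;
}
```

The point is that each step is explicit in one place, so the behavior stays predictable once the app is in production.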

Use separate classes to hold configuration data

This should go without saying, but I’ll say it anyway – don’t put configuration variables into any other classes. Keep these concerns separate. If a class needs the configuration data, use composition to make the configuration object a private member. Another thing to watch out for is configuration data that doesn’t live in its own objects at all. I’ve seen a lot of projects where configuration data was scattered all over the place. Don’t do this!
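A minimal sketch of the composition I mean (the exporter class and property name are invented):

```csharp
public class CrmDataExporter {
    // Configuration lives in its own class and is composed in as a
    // private member, rather than scattered across the exporter as fields.
    private readonly MSCrmConfigurationSection _config;

    public CrmDataExporter( MSCrmConfigurationSection config ) {
        _config = config;
    }

    public void Export() {
        // ...uses _config.ServerUrl and friends (hypothetical properties)
    }
}
```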

I’m going to follow up with some code examples soon so stay tuned.

If you’re like me, you have clients behind firewalls and special development environments configured to mirror the client’s environment for local testing. In my case a lot of my development environments have Microsoft CRM installed on them as a self-contained development environment replete with Active Directory. This means that the virtual machine does not have any local accounts, only domain accounts. When I log in, I’m logging into a domain.

This works out pretty well most of the time. However, recently I wanted to use my SSH backhaul trick to grab some data from a client’s site back through their firewall. In order to get this to work, I had to do some extra experimentation with CopSSH user accounts and my VirtualBox settings.

First off, let me recap exactly what we are trying to do. It might be worthwhile to look at my SSH backhaul article first, but what we are doing is running a secure shell server locally on the virtual machine and connecting to it from the remote server using Putty. This lets us access things like Microsoft CRM services on the remote machine for doing things like data dumps and schema upgrades.

I’m using VirtualBox as my virtualization environment. I happen to be using NAT (network address translation) instead of a bridged network connection. This means that there is one extra step that I didn’t cover in my previous article, which I will outline here. The complete end-to-end scenario becomes:

Putty on remote server -> firewall on my local network -> VirtualBox NAT on my laptop -> VirtualBox VM -> CopSSH daemon

So I covered everything in the previous article except for setting up VirtualBox NAT. Fortunately it is very simple. We need to set up port forwarding across the NAT. To do this, go into the network settings of the running virtual machine and look for the button that says “port forwarding”. This lets you set up the host and guest ports. I had to set the IP addresses rather than leave them blank; what I did was set both to, which means “all addresses”. Here is a screenshot:

I’m mapping the SSH port 22 to port 2222 to avoid conflicts with the native sshd daemon that is running on my Ubuntu laptop.
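For reference, the same rule can be created from the command line with VBoxManage (the VM name here is made up; use controlvm rather than modifyvm if the VM is already running):

```
rem Forward host port 2222 to guest port 22 over the NAT interface.
VBoxManage modifyvm "CRM-Dev-VM" --natpf1 "ssh,tcp,,2222,,22"
```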

Once we have this set up, we can test the connection locally by using Putty to connect to localhost on port 2222. We should get a login prompt from CopSSH.

Once we know that CopSSH is working and reachable via the port forwarded over the VirtualBox NAT, we need to authorize the user accounts that can log in via SSH. This is where we have to pay close attention. The thing that caused me a lot of pain was that the domain and account names are case sensitive. When adding the user account put the domain name in all capitals and pay attention to the case. Check out the screenshots:

One last thing: the user accounts will need the right to log on locally. Double check using the Local Security Policy tool (look in Administrative Tools):

Test logging in with the domain account using domain\user:

If login works, set up the tunnel the same way as in my previous article and rock on!

I’ve been cleaning up a lot of projects lately, and one of the things that I really needed to get sorted out was build scripts for everything, so that I had a repeatable way of obtaining correct binary builds of all of my dependencies. Typically I would use msbuild or NAnt to define how the code should be built, but I already had Visual Studio project files for the projects in question, and I wanted the builds to be resilient to having the project files messed with in Visual Studio.

Since Visual Studio project files are just custom msbuild definitions supplied by Microsoft, we can build them with msbuild on the commandline. I didn’t want to do any custom tweaking of the files themselves, since I’ve had issues with losing my changes when someone decided to mess with the project file. What I really want out of the project file is the list of the files that need to be built. In the future I’m thinking about devising a way to create a wrapper msbuild project that looks at the parts of the Visual Studio file for the source file list and dependencies and defines its own build targets, but I’m too busy to try to get that working for now.

I’ve had all sorts of issues with trying to tweak msbuild files. Microsoft really came up with a powerful build system, but unfortunately it is hamstrung when trying to work with Visual Studio. One thing I wanted to do in the past was to get a version number from the AssemblyInfo.cs file. This turned into a fiasco because of a silly limitation in the scope of property variables within the build targets.

After all of this fiddling I have come to the conclusion that, at least for now, the best way to get msbuild to build a Visual Studio project in a particular way is to manhandle the property settings on the commandline when calling msbuild.

Here is a sample batch file that I use to call msbuild. I want to set the .NET framework version and project configuration (debug/release) along with assertion of some conditional compilation flags used to generate different targeted versions of the code (CRM 4/CRM 2011):

set msbuild=C:\Windows\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe
set PointVersion=1.0.0
set outpathprop=/property:OutputPath=..\release\%PointVersion%\3.5

%msbuild% ^
    /property:Configuration=Debug ^
    /property:TargetFrameworkVersion=v3.5 ^
    %outpathprop%\CRM4\Debug ^
    /p:DefineConstants="CRM4"

You can see that I set the output path to include the release version number and the framework version. I could parameterize this even further, but that’s an exercise I’ll leave to the reader or until I refine this technique next time around. Notice that I define a constant called ‘CRM4’ which is responsible for triggering some #ifdefs in the code that include CRM4-specific code. The beauty of this script is that now I can just add a section for each configuration that I need to build, run it once and I’ve got all the debug and release configurations of both CRM4 and 2011 versions of my project.
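Adding another configuration is then just another invocation; a hypothetical release build of the CRM 2011 flavor (the CRM2011 constant and the 4.0 output folder are my assumptions here, mirroring the CRM4 section above) would look like:

```
rem Release build of the (hypothetical) CRM 2011 flavor on .NET 4.0
set outpathprop2011=/property:OutputPath=..\release\%PointVersion%\4.0

%msbuild% ^
    /property:Configuration=Release ^
    /property:TargetFrameworkVersion=v4.0 ^
    %outpathprop2011%\CRM2011\Release ^
    /p:DefineConstants="CRM2011"
```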

The file layout that I’ve chosen is:


Effectively, this build script controls the versioning of the build. I got tired of trying to do this by reading AssemblyInfo.cs, but we’ll have to deal with it if we want to build a signed assembly.

The advantage of having a consistent folder structure is that we can now put the built artifacts into an assembly cache and use a dependency management script to copy them to a local lib\ folder under the project directory. We can manage multiple versions of the same assembly and not get completely confused as to which version we are referencing in the project. Let me know about your quick-and-dirty build techniques in the comments!