We're Published

Posted by Nicole Hughes Sun, 06 Nov 2005 05:14:00 GMT

Microsoft published a case study on our Gateway project at Worldspan. Check it out.

I am the lead on this project, so the study comes complete with my own cheesy quotes. Being referred to as "Hughes" in published articles is new for me, but definitely something I could get used to.

Microsoft released Whidbey last week, so now it is available to the masses. Here's a list of breaking changes in .NET Framework 2.0.

I hope you enjoy the new compiler as much as we have (hopefully even more so, since you don't have to deal with the bugs we ran into in the alphas and betas)!

Posted in .NET

To Strong Name or Not to Strong Name

Posted by nhughes Sat, 09 Jul 2005 04:08:00 GMT

A while back, I did some research on strong naming (i.e., signing) and the GAC. I thought it might make an interesting post. We ultimately signed all of our assemblies.

When strong names MUST be used:

  1. Any assemblies loaded into the GAC must have strong names.
  2. For a .NET class to take advantage of Enterprise Services (COM+ services), such as distributed transactions and object pooling, the assembly containing the class (a serviced component, since it inherits from System.EnterpriseServices.ServicedComponent) must have a strong name.
  3. For a .NET component to be used by a non-.NET component, the assembly must have a strong name. (This is the reason we ultimately signed all of our assemblies: we have a component that must be called by a VB 6.0 app.)

Advantages of using strong names:

  1. Versioning: Eliminates DLL Hell
    - Developers can uniquely identify versions of .NET assemblies.
    - Different versions of the same assembly can run side-by-side.
  2. Authentication
    - Strong names ensure the code is provided by the publisher.
    - Strong names can be used with code groups to provide levels of access to the assembly. For instance, Admins and Developers can use strong names with code groups to provide assemblies with higher permissions than the default .NET framework permission level.
  3. Binary Integrity
    - The CLR can tell from the signing of the assembly whether it has been tampered with since it was last compiled.
  4. Potentially fixes a problem where shared assemblies get added multiple times in a Microsoft Setup Project. This may not be necessary with Beta 2, though.

Disadvantages to using strong names:

  1. Creates additional work to go through and sign every assembly in a project (and every assembly referenced, like Enterprise Library Assemblies).
  2. When you sign assemblies, the CLR looks for each strongly named assembly reference in the GAC first, and only probes the application directories if it cannot find it there. If you have no assemblies in the GAC, that extra lookup is wasted work on every load.
  3. Signing the assemblies brings tighter security and may surface security issues between assemblies, requiring more troubleshooting time to figure out the problems.

Note: If you decide to sign your assemblies and you are using Enterprise Library, you will have to ensure that every assembly you use is referencing the signed copy of Enterprise Library. This gave us fits.
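For reference, here is the usual signing setup from that era; the key-file path below is illustrative. You generate a key pair once with the Strong Name tool (sn.exe) and reference it from AssemblyInfo.cs (in Visual Studio 2005 you can instead set the key file on the project's Signing tab, which avoids the attribute):

```csharp
// Hypothetical AssemblyInfo.cs for a signed assembly.
// First generate a key pair from a Visual Studio command prompt:
//     sn -k KeyPair.snk
using System.Reflection;

// Points the compiler at the key pair so the assembly gets a strong name.
[assembly: AssemblyKeyFile(@"..\..\KeyPair.snk")]
[assembly: AssemblyVersion("1.0.0.0")]
```

Once signed, the assembly can be installed into the GAC with gacutil /i, and any referencing project must rebuild against the signed copy.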

Posted in .NET

Encryption Update

Posted by nhughes Fri, 08 Jul 2005 05:30:00 GMT

I've been a huge slacker with respect to updating this blog. So, it's about time I gave some results from our encryption changes.

Prior to any encryption modifications (see the previous two blog entries), we had the CPU maxed out with about 900 encrypted connections, each sending and receiving one message per second.

After making the RC2 and RSA changes suggested by Microsoft, we saw a 15-20% improvement. So, we could connect 900 users using 80% of the CPU. Granted, we've still got obstacles to overcome with performance in this area, but this is a marked improvement.

Plus, once we changed the test to send and receive one message every 5 seconds (which is more realistic for our customer base), we were able to get 1600 encrypted users running at about 50% CPU utilization. We may be able to live with that for alpha testing, at least.

Posted in .NET

RC2 Performance Enhancements

Posted by nhughes Mon, 13 Jun 2005 10:26:00 GMT

Since our app is still experiencing a severe bottleneck with encryption, I asked Microsoft for suggested improvements. While they had no suggestions for our problems with RSA efficiency, they did suggest two changes to our RC2 implementation.

  1. I originally implemented our RC2 processing to generate a new session key for each data message sent to the client. They suggested, instead, that I reuse the session key throughout the life of the session because calling the RC2CryptoServiceProvider constructor is an expensive operation. (The RC2 session key is encrypted with RSA, so that will ensure its security.) NOTE: Encryption with RC2 is not thread safe, so you will have to lock your encryption/decryption code if you are going to use the same key through the session.
  2. I was using a CryptoStream to get encrypted and decrypted data from RC2, like this:

Decrypt Code:

lock (this)
{
    // Note:  The sessionKey is a byte[] read from the client.
    // The iv is a byte[]  of 0x00's to comply with the 
    // Microsoft Crypto Service Provider specs.

    ICryptoTransform decryptor 
        = rc2Decryptor.CreateDecryptor(sessionKey, iv);

    // Decrypt the data
    MemoryStream msDecrypt 
        = new MemoryStream(encryptedData.GetSegment());

    CryptoStream csDecrypt 
        = new CryptoStream(msDecrypt, decryptor, 
                           CryptoStreamMode.Read);

    int dataLen = encryptedData.Count;

    byte[] decryptedData = new byte[dataLen];

    //Read the data out of the crypto stream.
    int bytesRead = csDecrypt.Read(decryptedData, 0, dataLen);
}

Encrypt Code:

MemoryStream msEncrypt = new MemoryStream();

lock (this)
{
    // Encrypt the data.
    // Note:  encryptor was initialized earlier.
    // It is an ICryptoTransform that holds our session
    // key, which is held for the duration of the session.
    CryptoStream csEncrypt 
        = new CryptoStream(msEncrypt, encryptor, 
                           CryptoStreamMode.Write);

    //Write all data to the crypto stream and flush it.
    csEncrypt.Write(data.GetSegment(), data.Offset, data.Count);
    csEncrypt.FlushFinalBlock();
}

However, our team found that the streams were causing efficiency problems. So, Microsoft's suggestion was to use the TransformFinalBlock method on the ICryptoTransform interface and get rid of the MemoryStream and CryptoStream objects entirely. They claim it makes more efficient use of memory. The code now looks like this:

Decrypt Code:

byte[] decryptedData;

lock (this)
{
    ICryptoTransform decryptor 
        = rc2Decryptor.CreateDecryptor(sessionKey, iv);

    // Decrypt the data
    decryptedData 
        = decryptor.TransformFinalBlock(encryptedData.Array,
            encryptedData.Offset, encryptedData.Count);
}

Encrypt Code:

byte [] encryptedData;

lock (this)
{
    //Encrypt the data.
    encryptedData 
        = encryptor.TransformFinalBlock(data.Array, 
            data.Offset, data.Count);
}

We have implemented both of these suggestions and are beginning our load testing and profiling again to determine whether they made a difference. I'll update the blog with the results when we have them.

By the way, in case you are curious about RSA... Microsoft stated that RSA encryption is slower pre-Windows XP because direct RSA encryption was not supported in the Crypto API. Unfortunately, that doesn't explain our RSA problems, since we are on Windows Server 2003.

Posted in .NET

RSA encryption between .NET and Win32 made easier!

Posted by nhughes Tue, 31 May 2005 03:55:00 GMT

Today we have a Win32 client (written in C++ and COM) that will communicate over TCP/IP to our new .NET server-side code. The Win32 client won’t be ported to .NET anytime in the near future. So, the two code bases have to be compatible. This challenge is amplified by the fact the client uses a combination of asymmetric encryption (RSA) and symmetric encryption (RC2) to encrypt the data that will be sent over the socket. So, our .NET code must be able to decrypt that data and then encrypt the response to be sent back to the client.

This quickly became a challenge because the two platforms implement encryption very differently. In .NET, I ended up having to parse the RSA public key from the client. This meant parsing the key header, exponent, and modulus out of the key blob so they could be read into the RSACryptoServiceProvider. Also, because .NET stores the exponent and modulus big-endian while the Win32 key blob stores them little-endian, the byte ordering had to be reversed before .NET could read them. This is quite a bit of overhead. Then, the whole process had to be done again (only in reverse) to create a .NET key that a Win32 client can read. Suffice it to say, it took quite a bit of trial and error to get it all correct, and it was a big pain.
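To give a flavor of the old approach, here is a simplified sketch of parsing a CAPI PUBLICKEYBLOB into RSAParameters. The class and method names are illustrative (this is not our production code); the offsets come from the documented blob layout: an 8-byte BLOBHEADER, then a 12-byte RSAPUBKEY (magic, bit length, public exponent), then the modulus.

```csharp
using System;
using System.Security.Cryptography;

static class CapiKeyParser
{
    public static RSAParameters ParsePublicKeyBlob(byte[] blob)
    {
        // Key size in bits lives at offset 12 (inside RSAPUBKEY).
        int bitLen = BitConverter.ToInt32(blob, 12);

        // Public exponent: 4 little-endian bytes at offset 16.
        byte[] exponent = new byte[4];
        Array.Copy(blob, 16, exponent, 0, 4);
        Array.Reverse(exponent);                  // now big-endian for .NET

        // Strip leading zero bytes so RSAParameters accepts it.
        int firstNonZero = 0;
        while (firstNonZero < exponent.Length - 1 && exponent[firstNonZero] == 0)
            firstNonZero++;
        byte[] trimmedExponent = new byte[exponent.Length - firstNonZero];
        Array.Copy(exponent, firstNonZero, trimmedExponent, 0, trimmedExponent.Length);

        // Modulus: bitLen/8 little-endian bytes following RSAPUBKEY (offset 20).
        byte[] modulus = new byte[bitLen / 8];
        Array.Copy(blob, 20, modulus, 0, modulus.Length);
        Array.Reverse(modulus);                   // now big-endian for .NET

        RSAParameters parameters = new RSAParameters();
        parameters.Exponent = trimmedExponent;
        parameters.Modulus = modulus;
        return parameters;
    }
}
```

The reverse direction (building a Win32-readable blob from ExportParameters) follows the same layout in mirror image.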

This is no longer necessary with Whidbey Beta 2!! There are new methods on the RSACryptoServiceProvider for importing and exporting the key blob (ImportCspBlob & ExportCspBlob). No more parsing, no more byte-order reversing! It is a very good thing. I only wish it had been available earlier, so I didn't spend so much time doing it the hard way.
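As a quick sketch of the Beta 2 way (assuming default provider settings), a public-key round trip between two providers collapses to two calls:

```csharp
using System;
using System.Security.Cryptography;

RSACryptoServiceProvider sender = new RSACryptoServiceProvider();

// Export only the public key (pass true to include private parameters).
byte[] publicBlob = sender.ExportCspBlob(false);

// The blob is in the same CAPI format Win32 expects, so it could also
// be handed to CryptImportKey on the C++ side.
RSACryptoServiceProvider receiver = new RSACryptoServiceProvider();
receiver.ImportCspBlob(publicBlob);
```

The receiver can now encrypt data that only the sender's private key can decrypt, with no hand-rolled blob parsing anywhere.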

Now, if only Microsoft would add this for the RC2CryptoServiceProvider!

Posted in .NET

SQL Code no-no's

Posted by nhughes Thu, 12 May 2005 22:32:00 GMT

Good advice. Check out these 10 things you should not do with SQL Server in .NET.

Posted in .NET

New Memory Allocation with MemoryStream.ToArray()

Posted by nhughes Thu, 05 May 2005 09:43:59 GMT

If you find that you need to reduce your frequent memory allocations and deallocations, try to reduce or eliminate your use of MemoryStream.ToArray(). ToArray() copies the memory stream buffer into a new byte array. You might consider using MemoryStream.GetBuffer() instead. This method returns the byte array inside the MemoryStream, instead of making a copy.

Note: GetBuffer() returns the entire array that was originally allocated, which is probably much larger than the data you wrote. You'll need to use the MemoryStream's Length property to pick your data out of the array.
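A minimal illustration of the difference:

```csharp
using System;
using System.IO;

MemoryStream ms = new MemoryStream();
ms.Write(new byte[] { 1, 2, 3 }, 0, 3);

byte[] copy = ms.ToArray();       // fresh 3-byte array (a new allocation)
byte[] buffer = ms.GetBuffer();   // the stream's internal buffer, no copy

// The internal buffer's capacity usually exceeds what was written, so
// use Length to know how many leading bytes are valid data.
int validBytes = (int)ms.Length;  // 3
```

If the consuming API accepts an offset and count (as most Stream and socket methods do), you can pass the buffer plus Length directly and avoid the copy entirely.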

Posted in .NET

Stack Imbalance Due to Interop

Posted by nhughes Tue, 03 May 2005 07:20:00 GMT

Don't get bitten by this invisible bug.

If your .NET application uses interop to unmanaged code and the unmanaged code calls back into your .NET code (using delegates), make sure the calling conventions of the delegate and the interop methods are defined the same.

If you don't do this, you will cause a stack imbalance, which may go unnoticed. Errors of this type seem to be reported only by the Managed Debugging Assistant in Visual Studio 2005 Team Suite Beta 2 (not in VS 2005 Standard Beta 2). The error shows up as:

A call to PInvoke function has unbalanced the stack. This is likely because the managed Invoke signature does not match the unmanaged target signature. Check that the calling convention and parameters of the PInvoke signature match the target unmanaged signature.

To resolve this, make sure the delegate calling convention matches the calling convention of the interop methods. For example, here is the definition of an interop method on a dll, whose calling convention is cdecl:

     [DllImport("Implode.dll", CallingConvention = CallingConvention.Cdecl)]
     public extern static uint implode(
          PKCompression.ReadMemoryEventHandler ReadMemDelegate,
          PKCompression.WriteMemoryEventHandler WriteMemDelegate,
          IntPtr work_buf,
          ref PKCompression.CompressionMemoryParameters Param,
          ref uint type, ref uint dsize);

The delegate that implode.dll calls back into should define this same cdecl calling convention, like this:

     [UnmanagedFunctionPointerAttribute(CallingConvention.Cdecl)]
     internal delegate void WriteMemoryEventHandler(
          IntPtr buff, ref uint size, 
          ref CompressionMemoryParameters memoryParameters);

Posted in .NET

Microsoft's Suggestions for .NET Application Improvements

Posted by nhughes Mon, 02 May 2005 07:42:00 GMT

Back in February 2005, my development group visited the Whidbey TAP lab in Redmond, WA. We met with quite a few of the groups one-on-one, such as Patterns and Practices (aka Platform Architecture Guidance), Visual C++, ADO.NET, threading, and others. Here are the suggestions from Microsoft compiled while we were there:

Threading

  • Do not use System.Timers.Timer in your applications; use System.Threading.Timer instead. System.Timers.Timer is a heavier wrapper around System.Threading.Timer and is less efficient.
  • Instead of using a Timer for tracking elapsed time in your application, a Stopwatch is more appropriate and more efficient.
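A quick sketch of both suggestions; the Sleep call is just a stand-in for real work:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

// System.Threading.Timer fires its callback on a thread-pool thread.
// Here: first tick after 1 second, then every second thereafter.
Timer timer = new Timer(delegate(object state)
{
    Console.WriteLine("tick");
}, null, 1000, 1000);

// To measure elapsed time, use a Stopwatch rather than counting timer ticks.
Stopwatch watch = Stopwatch.StartNew();
Thread.Sleep(10);                 // stand-in for the work being timed
watch.Stop();
Console.WriteLine("Elapsed: {0} ms", watch.ElapsedMilliseconds);

timer.Dispose();
```

Stopwatch uses the high-resolution performance counter when the hardware supports it, so it is both cheaper and more accurate than driving a timer for this purpose.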

Enterprise Instrumentation vs. Enterprise Library

If you are using Enterprise Instrumentation for logging, STOP IT! Look into the new and majorly improved Enterprise Library. It requires less work and is less error-prone. When our application was using Enterprise Instrumentation, we had major memory leaks. Since the implementation of Enterprise Library, the application's performance has improved 100%. See this blog for more info on Enterprise Library.

Visual C++ - Resurrected

There are 3 situations in which you might use Visual C++ rather than C#:

  • If the developers are used to C++ and would like to continue its use, rather than ramp up on a new technology.
  • If the application uses complicated or multiple interop calls to native code.
  • If you want a lower-level of control in your development. For instance, the ability to override a method with a different name is available in C++ but not in C#.

They've even implemented the STL in .NET.

Posted in .NET

What this is all about.

Posted by nhughes Mon, 02 May 2005 07:02:00 GMT

Non-Technical Folks, Beware. This site may read as "blah, blah, blah, Microsoft, blah, blah" to you.

I'm a software development lead for Worldspan, L.P., a company providing software technology to the travel industry. My group is part of the Whidbey (i.e., Visual Studio 2005) Technical Acceptance Program. So, this means we are front-runners with the new IDE and .NET Framework 2.0. We've experienced the pain, hardships, and excitement of running the Beta 1 code and recently converted to Beta 2 (ahh, so much better!).

My goal for this blog site, at least in the short term, is to provide interesting information and helpful hints to all those interested in Microsoft .NET. I want to post about hurdles we've overcome whose solutions exist nowhere else on the web. So, some of the information will probably be pretty esoteric.

Enjoy!

Posted in .NET