private override

11 Dec 2009

Single File Split Buffers in Visual Studio!

Man, I’d searched for this feature time and time again.  And finally found it here: http://www.kevinwilliampang.com/post/Visual-Studio-Split-Views.aspx

If you don’t want to follow the link…

Just double-click the little splitter handle at the top of the editor's vertical scrollbar, or drag it downward, and you've split your file into two buffers.  Awesomeness.

[Screenshot: the splitter handle above the vertical scrollbar]

23 Sep 2009

Autotest… in .NET

The first time I saw autotest (presented by Anthony), the idea of Continuous Testing captured me.

I live in a .NET world most of the time, and I know of no similar solution for .NET.  It's been a while since that first time, and I've tinkered here and there trying to get something comparable, but usually come up short.  That is, until I found watchr.

Watchr gave me the file change detection capabilities I needed, and the extensibility to do whatever I want when a file has been detected as changed.  This made it incredibly easy to hook up some autotest goodness in my .NET world.

You'll have to have Ruby installed, and RubyGems.  Then, the very first thing you'll have to do is

gem install watchr --source=http://gemcutter.org

Here is my watchr script:

require 'autotest.rb'

watch( '^.*UnitTest.*\.cs$' ) do |match|
  run_test(match.to_s)
end

This is basically just a regex that says to watch any .cs file whose path also contains the string “UnitTest”; when a change is detected in a matching file, run_test is called with the matched file name.

So all the magic is in autotest.rb… let's check it out:

require 'rexml/document'

def build(test_project)
  `msbuild /nologo #{test_project}`
end

def mstest(test_container, test_results_file, tests_to_run)
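  # prepend an empty element so that join() below puts " /test:" in front of every test name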
  tests_to_run = ([""] << tests_to_run).flatten

  File.delete(test_results_file) if File.exists?(test_results_file)
  `mstest /nologo /resultsfile:#{test_results_file} /testcontainer:#{test_container} #{tests_to_run.join(" /test:")}`
  test_results = process_mstest_results(test_results_file)
  File.delete(test_results_file) if File.exists?(test_results_file)

  return test_results
end

def process_mstest_results(results_file)
  results = {}
  File.open(results_file) do |file|
    xml = REXML::Document.new(file)

    results[:num_tests] = xml.get_elements("//UnitTestResult").length
    failures = []
    xml.elements.each("//UnitTestResult[@outcome='Failed']") do |e|
      failure = {}
      failure[:message] = e.elements["Output/ErrorInfo/Message"].get_text

      stack = e.elements["Output/ErrorInfo/StackTrace"].get_text.value
      stack_match = /^.*at (.*) in (.*):line (\d+)$/.match(stack)

      failure[:stack] = stack_match[1] if stack_match
      failure[:location] = stack_match[2] if stack_match
      failure[:line] = stack_match[3] if stack_match

      failure[:stack] = stack if !stack_match

      failures << failure
    end
    results[:failures] = failures
  end

  return results
end

def show_results(results)
  puts "#{results[:num_tests]} tests run (#{results[:failures].length} failures)"
  results[:failures].each do |failure|
      puts "---------------------------------------"
      puts "Message: #{failure[:message]}"
      puts "Location: #{failure[:location]}"
      puts "Line: #{failure[:line]}"
      puts "Stack Trace: #{failure[:stack]}"
  end
end

def run_test(file_name)
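  # NOTE: set these two for your project: the compiled test assembly (test_container)
  # and the test project file to build (test_project)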
  test_container = ""
  test_results_file = "result.trx"
  test_project = ""

  system("cls")
  system("echo Detected change in:")
  system("echo   #{file_name}")
  system("echo Building and Testing")

  test_namespace = ''
  test_class = ''
  test_names = []

  File.open(file_name, "r") do |f|
    f.each do |line|
      ns_match = /^namespace (.*)$/.match(line)
      test_namespace = ns_match[1] if ns_match

      class_match = /^\s*public class (\w+).*$/.match(line)
      test_class = class_match[1] if class_match

      test_name_match = /^\s*public void (\w*).*$/.match(line)
      test_names << test_name_match[1] if test_name_match
    end
  end

  test_names = test_names.map { |n| "#{test_namespace}.#{test_class}.#{n}" }

  build(test_project)
  results = mstest(test_container, test_results_file, test_names)
  show_results(results)
end

The key parts (I think) are that I'm using MSTest to run my tests (this can easily be modified to run your framework of choice... note that MSTest is not my choice ;) ).  The result parsing is also specific to the MSTest output format, but should be simple enough to adapt for any framework that can output XML.  Also, I'm making some assumptions based on my project: we've got one unit test project, so I know I can run all the tests from a single DLL, and by rebuilding only that project I don't have to worry about choosing the correct project and output DLL to build and run tests in.

To get the thing up and running, just run

watchr <path to watchr script>

Please, use/adapt/give feedback/whatever at will.

Go forth and autotest, .NET comrades!

23 Aug 2009

connect Windows Mobile 6 Emulator to ActiveSync

This might be more for me than for anyone else, but I had an issue yesterday trying to connect my Windows Mobile (Classic 6.1.4) emulator to ActiveSync (4.5) on my machine so I could tweak some files over on the device.

Turns out it's pretty simple…

In the File -> Connections dialog, check the “Allow connections to one of the following” box.

[Screenshot: ActiveSync connection settings dialog, with the “Allow connections to one of the following” box checked]

This will allow DMA (which is what the emulator wants)… and it should work magically.

20 Apr 2009

7 tips for solid client interactions

I recently finished up a project where there was a small team of people from my company, and small teams from two of our local competitors.  My interactions and observations during our ‘full team’ meetings are the context for this post.  Please note that I'm not saying my teammates and I were innocent; some of these observations are about us as well.  These are simply my opinions.

  1. Don’t use your mobile during a client meeting… EVER (unless of course your client asks you to).  I thought this one was a given, but apparently not (you may as well turn off your ringer, SMS notification, voicemail notification, etc. since we’re on the subject).
    • This also goes for your laptop…
  2. Don’t SMS during a client meeting (assumes you’ve already violated number 1)… especially if the client is speaking directly to you! (seriously, I watched this happen more than once)
  3. Technical Presentations
    • Be prepared
      • Have content
      • Know your facts... if you don't, that's okay; please don't make something up
      • Please oh please do not read me your slides; I know how to read
    • Be professional
      • Explain yourself, but don’t be condescending
      • Use appropriate corporate branding (but don’t go over the top)
      • Just because PowerPoint has animations doesn't mean you have to use them. (I suggest removing ALL animations, transitions, gimmicks, hat-tricks, etc… they all come off to me as amateurish, not professional)
  4. Be professional
    • Don’t talk over someone else just to get your opinion heard; wait patiently, and then respond.
    • Don’t patronize
    • Offer opinions, but be willing to be wrong!
    • Offer opinions, and be willing to back them up!
  5. Speak Up!
    • Speak clearly
    • Speak concisely
    • Have a point (i.e. don’t babble on about nothing)
  6. Listen!  Actively!
  7. Be friendly

30 Mar 2009

2 amazazing productivity tools

I'm in the middle of writing a bunch of XML API documentation for a prototype I just built.  I'm not really a fan of XML doc-comments, which is why I didn't do it in the first place, but the client wants API documentation, so this is definitely the best way to get it.  The two aforementioned tools?

GhostDoc

GhostDoc basically infers the documentation from the name of the method and its parameter signature.  Absolutely brilliant.  Hook this baby up with a keyboard shortcut, and blam!, it just spits out documentation with a keystroke (which of course is easy to tweak once it's there).  One of the awesome features is that for implemented methods of an interface, it'll use the exact documentation from the doc-comments in the interface file.  Sweet.
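
As a taste: given a bare method like the hypothetical FindMovieByTitle below (the method and the _movies field are made-up examples, not from GhostDoc's docs), a single GhostDoc keystroke generates a doc-comment roughly like this (wording paraphrased from memory, so treat it as illustrative):

/// <summary>
/// Finds the movie by title.
/// </summary>
/// <param name="title">The title.</param>
/// <returns>The matching movie.</returns>
public Movie FindMovieByTitle(string title)
{
    return _movies.FirstOrDefault(m => m.Title == title);
}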

Docu

Docu is sort of like the NDoc of old.  I know that Sandcastle exists, but this is so much simpler.  It uses the Spark view engine/templating system, so the output is completely customizable.  Right now it comes with a single template that is heavily inspired by rdoc rather than something like the MSDN style (though I'm certain an MSDN-style template will be contributed to the project soon).  The project is really young, but it is already used by FluentNHibernate (and was the reason for its inception, really).  Here is the output for the FluentNH project: FluentNH API Docs.

Amazazing.

12 Mar 2009

Dependency Injection and Service Location

I was having a discussion with a colleague the other day about DI and Service Location in the context of the question (posed by a third person):

Which DI container/framework should I choose?

His answer:

None; just roll your own and use simple Service Location

I'm not meaning to ruffle any feathers, and I couldn't come up with a good case on the spot, but as I've thought about it some more, I respectfully but completely disagree (at least for the reasons that I choose to use a DI container).

What are the reasons I use to decide whether or not to use a DI container?

  • Reduce Coupling
  • Testability
  • Declarative Configuration
  • Rapid Development

Using a simple service locator pattern, with a hand-rolled instantiation mechanism, can get you the first two, but not the last two (unless you spend many, many hours on your hand-rolled solution, which would most likely turn out just like one of the already existing containers).  If you don't care about declarative configuration, does that mean that using a container vs. not using a container is roughly the same?  No.

There are two other major benefits that I get from using a container that are not in the list above, but I feel are important, and help testability.

  • Published/declared dependencies
  • Autowiring

Published dependencies vs implicit dependencies

Below is an illustration of two constructors.  The first uses published dependencies, and the second uses implicit ones.

public MovieLister( IMovieFinder finder )
{
    // DI style
    _finder = finder;
}

public MovieLister()
{
    // service locator style
    _finder = ServiceLocator.Resolve<IMovieFinder>("CsvMovieFinder", "movies.txt");
}

The first is more easily testable and usable because you know your dependencies up front (they're published by the constructor signature), whereas with the implicit dependency example you don't know which services are needed unless you look at the implementation.  Additionally, the implicit version ties you to the implementation of the service locator, and also to the "CsvMovieFinder" key (I suppose this key could be in app.config or some other configuration mechanism, so it could actually be configurable).

I agree it can be just as simple to inject mocks into your service locator for testing the implicit example, but I'd rather not have to think about it.  Publishing those dependencies up front makes the object easier for the consuming client to use and extend, because then the client developer knows exactly what is expected.
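
To make that concrete, here's a minimal sketch of what testing the published-dependency version can look like.  It assumes the shapes of Movie and IMovieFinder (which this post never defines; I've given the finder a FindAll() and assumed List() writes titles to a TextWriter), and uses MSTest attributes and a hand-rolled stub rather than a mocking framework:

public class StubMovieFinder : IMovieFinder
{
    // hand-rolled stub: returns a canned list instead of reading a file
    public IList<Movie> FindAll()
    {
        return new List<Movie> { new Movie("Office Space") };
    }
}

[TestClass]
public class MovieListerTests
{
    [TestMethod]
    public void Lists_movies_from_its_finder()
    {
        // the dependency is published, so the test just hands a stub in;
        // no locator configuration required
        MovieLister lister = new MovieLister(new StubMovieFinder());
        StringWriter output = new StringWriter();

        lister.List(output);

        StringAssert.Contains(output.ToString(), "Office Space");
    }
}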

Autowiring

When dependencies are published, it allows the container to use autowiring to build up your application instance.  This may sound like hand-waving magic to those of you who haven't seen it in action, but it truly is one of the best features that comes from using a DI container.

The following is a simple example of how autowiring can work.

public class Program
{
    private IContainer _container;

    public Program()
    {
        // configure container
        _container = new Container();
        _container.Register<MovieLister>();
        _container.Register<IMovieFinder, CsvMovieFinder>("movies.txt");
    }

    public void Run()
    {
        // uses autowiring to inject CsvMovieFinder
        // into MovieLister
        MovieLister lister = _container.Resolve<MovieLister>();

        lister.List( Console.Out );
    }
}

The container builds up a dependency graph for the requested object, and walks it bottom-up, supplying each parent with its required dependencies.  In this example, the container sees the published dependency that the MovieLister has on the IMovieFinder, automatically instantiates the IMovieFinder it knows about, and injects it as the MovieLister is created.  This case is somewhat trivial since the graph is only one or two levels deep, but I know you've run into code before where this would be useful.
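
To see why that matters beyond the trivial case, suppose (hypothetically; none of this is in the example above) that CsvMovieFinder itself published a dependency on some IFileSystem abstraction.  With the same imaginary container API, one extra registration is the only change:

public class CsvMovieFinder : IMovieFinder
{
    private readonly IFileSystem _fileSystem;
    private readonly string _fileName;

    // the finder now publishes a dependency of its own
    public CsvMovieFinder(IFileSystem fileSystem, string fileName)
    {
        _fileSystem = fileSystem;
        _fileName = fileName;
    }

    // ... FindAll() elided ...
}

// in Program's constructor:
_container.Register<MovieLister>();
_container.Register<IMovieFinder, CsvMovieFinder>("movies.txt");
_container.Register<IFileSystem, LocalFileSystem>();

// Resolve<MovieLister>() now builds LocalFileSystem, injects it into
// CsvMovieFinder, and injects that into MovieLister: the whole graph,
// bottom-up, with no extra wiring code.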

Service Location with a DI container

Sometimes it is useful to use the service locator pattern; I'm certainly not refuting that.  It's definitely possible, and is actually usually the easiest way to get introduced to using a DI container.  In fact, the last example uses the container in exactly that fashion when pulling the MovieLister out of the container: the program is using the container as the service location facility.  The key, though, is that we're leveraging the robustness of the container to autowire and instantiate anything else it needs, rather than requiring each dependency to wire itself up.

It is also worth mentioning that Microsoft has released a common container interface called the CommonServiceLocator to help framework developers abstract their container choice away from client developers, so clients can pick whichever container they want (or roll their own).
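
For the curious, consuming it looks roughly like this (a sketch: ServiceLocator and IServiceLocator live in the CommonServiceLocator's Microsoft.Practices.ServiceLocation namespace, and containerAdapter is whatever adapter your container of choice provides):

using Microsoft.Practices.ServiceLocation;

public static class Bootstrapper
{
    public static void Configure(IServiceLocator containerAdapter)
    {
        // framework code sees only the common interface,
        // never the concrete container behind it
        ServiceLocator.SetLocatorProvider(() => containerAdapter);
    }
}

// later, anywhere in framework code:
// IMovieFinder finder = ServiceLocator.Current.GetInstance<IMovieFinder>();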

Hopefully this clears something up for someone, and doesn't just muddy the waters.

25 Feb 2009

Making the ‘using’ statement more usable

I've felt the pain of the using statement not being able to return a value, and just chalked it up to a limitation of the language.

I never considered trying to find a better way, and usually ended up with something like:

void DoSomething()
{
    XmlDocument xml = new XmlDocument();
    using(var reader = new StreamReader(@"c:\somexmlfile.xml"))
    {
        xml.Load(reader);
    }

    // do something useful with xml
}

or

void DoSomething2()
{
    XmlDocument xml = new XmlDocument();
    using( var reader = new StreamReader( @"c:\somexmlfile.xml" ) )
    {
        xml.Load( reader );
        // do something useful with xml
    }
}

Neither of which particularly suited me, because:

  • First example: Declaring the variable outside the scope of the block feels weird
  • Second example: Performing the operation inside the using block means the file (or other resource) doesn't get released until the operation has completed.

After Shawn mentioned something about this at his Indy Alt.Net Clojure talk, I started thinking about it a little bit; then I had a real need for it (again), and decided to come up with something better.  Turns out what I wanted was blatantly simple.

TResult UsingReturn<TDisposable, TResult>(TDisposable toUse, Func<TDisposable, TResult> func)
    where TDisposable : IDisposable
{
    using (toUse)
    {
        return func(toUse);
    }
}

Now the above example turns into:

void DoSomething3()
{
    var xml = UsingReturn( new StreamReader( @"c:\cdf.xml" ), reader =>
                {
                    var doc = new XmlDocument();
                    doc.Load( reader );
                    return doc;
                });
}

I'm not sure where this thing should live (e.g. some static utility class, extension method on IDisposable), or even what it should be called.  Any ideas?
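
For what it's worth, here is what the extension-method option might look like (just a sketch; the class name is a placeholder):

public static class DisposableExtensions
{
    public static TResult UsingReturn<TDisposable, TResult>(
        this TDisposable toUse, Func<TDisposable, TResult> func)
        where TDisposable : IDisposable
    {
        using (toUse)
        {
            return func(toUse);
        }
    }
}

That turns the call site into new StreamReader( @"c:\cdf.xml" ).UsingReturn( reader => ... ), which at least reads left to right.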

23 Feb 2009

build automation evolution – CruiseControl.rb and .NET (and TFS!)

Last time we talked about using RAKE instead of NAnt/MSBuild to build .NET projects.  Start there if you're curious, or if you missed that episode.  I'll wait.

I've recently been helping a colleague get his build server up and running for his new project.  It has been a major pain.  The source code repository is TFS, the build automation tool is MSBuild, and the CI server is CruiseControl.NET.  While these three are all decoupled from each other (TFS/Team Build does really like MSBuild, however), it's a bit of a pain to configure all of the XML, get plugins in the right place, etc., to get everything to work just right.  I've been there and done that myself several times; it's doable, but not always the easiest thing to do.

Installing CruiseControl.rb is a breeze, except that there is an issue with the latest release when trying to run it on Windows.  So instead of downloading it from the website, I would suggest pulling the repository with git and using that version instead.  First, install msysgit.

Then you can execute the following command in your console

git clone git://rubyforge.org/cruisecontrolrb.git

The rest of the steps are pretty simple and straightforward:

cruise add projectname -u https://path.to.your.svn.repo/yourproject/trunk
cruise start

That will start up the builder and the dashboard.  The dashboard, by default, will live on port 3333, so browse to http://yourmachine:3333 to view your dashboard.  The path you use above should point to the directory where your RAKE file lives; that will make it easiest for CruiseControl.rb to get it right.

It's pretty dead-simple to configure your project's builder too: you get a sample configuration by default in %USERDIR%/.cruise/projects/projectname/cruise_config.rb that you can modify however you want.  Here is what that sample looks like:

# Project-specific configuration for CruiseControl.rb

Project.configure do |project|

  # Send email notifications about broken and fixed builds to email1@your.site, email2@your.site (default: send to nobody)
  # project.email_notifier.emails = ['email1@your.site', 'email2@your.site']

  # Set email 'from' field to john@doe.com:
  # project.email_notifier.from = 'john@doe.com'

  # Build the project by invoking rake task 'custom'
  # project.rake_task = 'custom'

  # Build the project by invoking shell script "build_my_app.sh". Keep in mind that when the script is invoked,
  # current working directory is [cruise data]/projects/your_project/work, so if you do not keep build_my_app.sh
  # in version control, it should be '../build_my_app.sh' instead
  # project.build_command = 'build_my_app.sh'

  # Ping Subversion for new revisions every 5 minutes (default: 30 seconds)
  # project.scheduler.polling_interval = 5.minutes

end

Since it is just Ruby code, I find that much more appealing than a big nasty XML configuration file, but I guess that's just my opinion.

Hooking it up to TFS

I live in a TFS world at the office, so I have to play by those rules.  But I learned from Morpheus that "... rules of a computer system... can be bent. Others can be broken."  I'm just bending them.

The SvnBridge provides this rule-bending behavior by letting your Subversion clients talk to your TFS repository, thereby allowing CruiseControl.rb to poll against what it thinks is a Subversion repository, but is actually a TFS repository.  It's really simple to check out your TFS repository through it using your favorite SVN client, so I won't go into it here.

Build Outputs

The one thing I wanted to make super simple was putting build outputs in the right place.  Turns out this is fairly easy after a few minutes with the docs.  CruiseControl.rb sets an environment variable telling you where to put things.  I abstracted a getter over top of it, so I can conditionally pick a different output location if I'm not inside of a CruiseControl.rb build.  It looks like this:

def output_dir
  if ENV.keys.include?('CC_BUILD_ARTIFACTS')
    return ENV['CC_BUILD_ARTIFACTS']
  else
    return 'results'
  end
end

As mentioned last time, you can find the latest version of the full RakeFile here: http://jonfuller.googlecode.com/svn/trunk/code/CoreLib/RakeFile

I'm feeling some serious CI/Build Automation bliss, hopefully this will get you on your way there too!

23 Feb 2009

Re: How do you stay AND grow? A commentary

This post started out as a comment to How do you stay AND grow?, but it got a little lengthy for a comment so I decided to write a post instead.

The original post was in response to Uncle Bob's Multi-dimensional Seniority.  The following is my response to Daniel:

I think the metaphor of apprentice -> journeyman -> craftsman breaks down a bit when we get to talking about the fact that each is "mentored" under only one.  While that is true in the skilled-trades world (and mostly holds true in the software world as well, as far as technical mentoring goes), the software world is so much more than technical talent and ability.  I'm sure you've read Peopleware and are familiar with this idea.

That being said, getting exposed to lots of different people across various organizations will up your game in the arena of interpersonal relations (I know it has for me).  Now, I'm not saying you can't also grow these skills from within a company (I know I certainly did at my previous employer/Daniel's employer, more than I ever knew myself to be capable of), but now that I'm with a totally different set of folks, I'm learning totally different skills, which I don't think would ever have happened had I not changed companies.

However, I'd be willing to bet that with a sufficiently heterogeneous set of technical teams in a given company, you would be able to achieve the same result.  Technical teams will not only differ based on their preferred technology stacks, but each stack usually draws a unique set of interpersonal skills as well (e.g. "The Ruby community is so helpful and nice to noobs like me!" or "The Java group are a bunch of haters, and are too good to answer silly beginner questions").

To answer the question of

Can a company foster an environment where its developers get exposed to different technologies, development environments, languages, etc. so they don’t have to leave the company to do that?

I like what Google has done to infuse this learning and growing with their 20% time.  20% time is A LOT of time dedicated solely to learning/growing, but I'll bet you already spend about that much time, or at least somewhere between 10 and 20 percent.  We also have a program to take classes at work (real homework, real projects, just like school), taught by your colleagues.  Almost invariably, the classes are on technologies/languages that we don't currently use in production.

The last question Daniel poses is interesting.

If a company desperately wanted to do whatever it needed to to keep their developers growing without having to leave the company, what would that company need to do?

I doubt you'll find many companies, if any, whose goal is to do whatever it takes to keep their developers growing and not leaving.  As much as I love the idea of software craftsmanship, I can't imagine myself being the owner of a software shop and having that be my primary goal.  Possibly, I suppose, because I have no problem with packing up and leaving when something better comes along.  Don't get me wrong, I have company loyalty, but a number of things usurp it (in my book)... have you read Who Moved My Cheese?

Maybe my real question is why do you (read: y'all) need to stay?

23 Feb 2009

build automation evolution – Rake and .NET

When I first started with build automation, I started out with NAnt.  I loved NAnt. NAnt loved me. We were happy.  I could program anything with the NAnt XML goodness.  If there wasn't a function or task to do what I wanted, I simply wrote one, compiled it, and wrote some more XML; it couldn't be any simpler!  I think the part I liked the most was the instant gratification of being able to automate something that would/could not otherwise be automated [at least not in a simple manner] with a little bit of XML programming.

Soon after NAnt gained popularity, MSBuild was released by Microsoft, which eventually (and effectively) squashed NAnt (IMHO; no stats to back this up).  We never migrated our scripts over to MSBuild because we already had a significant investment in NAnt, but it wasn't hard to shell out to MSBuild to compile our solutions.  Eventually I worked on a new project (at a new company) and needed to learn how to use MSBuild, since we were using TFS on that project.

Shortly after integrating MSBuild into NAnt, and then learning MSBuild itself, I started feeling a twinge.  Now that automation is a given, I need something more than programming in this extremely limited XML environment.  Sure, I can write a new MSBuild task just like I did in NAnt, but is it worth it?  My answer is an emphatic no.  I need a great user experience: something that feels nice AND is powerful.

Enter RAKE.

It sounds like MAKE; if it looks and feels like MAKE, I might vomit!  No thanks!

Glad you brought that up, Dear Reader (if Hanselman can reference you like that, I can too).  It's not really like MAKE.  In fact, what you do inside of a RAKE file is write Ruby code!  RAKE really gives you a nice [internal] DSL for automating tasks.  If there is something you want to do that isn't built in, write a little Ruby code to do it.  No compiling a task and putting the DLL in the right place, etc., etc.  Programming in Ruby vs. XML... now that feels nice (requirement #1 above).

But wait!  RAKE is for building Ruby and Rails apps, we can't possibly use it for .NET!

RAKE, just like Ant, NAnt, or MSBuild, is a general-purpose, task-based automation tool.  It may be written in Ruby, but it can build .NET solutions (with the help of MSBuild), Java projects (with the help of Ant or Maven), or Flex, or whatever.  I call that powerful (requirement #2 above).

Please note I'm not claiming to be the first person to do this in .NET; I've found lots of other folks doing it too.

Here is an example of my first rake script for .NET (some pieces borrowed heavily from the Fluent NH guys... thanks!).

Enjoy/Discuss.

An always updated version of this file can be found here: http://jonfuller.googlecode.com/svn/trunk/code/CoreLib/RakeFile

require "BuildUtils.rb"
include FileTest
require 'rubygems'
gem 'rubyzip'
require 'zip/zip'
require 'zip/zipfilesystem'

#building stuff
COMPILE_TARGET = "debug"
CLR_VERSION = "v3.5"
SOLUTION = "src/CoreLib.sln"
MAIN_PROJECT = "CoreLib"

# versioning stuff
BUILD_NUMBER = "0.1.0."
PRODUCT = "CoreLib"
COPYRIGHT = "Copyright © 2009 Jon Fuller"
COMPANY = "Jon Fuller"
COMMON_ASSEMBLY_INFO = "src/CommonAssemblyInfo.cs"

desc "Compiles, tests"
task :all => [:default]

desc "Compiles, tests"
task :default => [:compile, :unit_test, :package]

desc "Update the version information for the build"
task :version do
  builder = AsmInfoBuilder.new BUILD_NUMBER,
    :product   => PRODUCT,
    :copyright => COPYRIGHT,
    :company   => COMPANY
  builder.write COMMON_ASSEMBLY_INFO
end

desc "Prepares the working directory for a new build"
task :clean do
  Dir.mkdir output_dir unless exists?(output_dir)
end

desc "Compiles the app"
task :compile => [:clean, :version] do
  MSBuildRunner.compile :compilemode  => COMPILE_TARGET,
    :solutionfile => SOLUTION,
    :clrversion   => CLR_VERSION
end

desc "Runs unit tests"
task :unit_test => :compile do
  runner = NUnitRunner.new :compilemode => COMPILE_TARGET,
    :source       => 'src',
    :tools        => 'tools',
    :results_file => File.join(output_dir, "nunit.xml")
  runner.executeTests Dir.glob("src/*Test*").map { |proj| proj.split('/').last }
end

desc "Displays a list of tasks"
task :help do
  task_lines = `rake.cmd -T`.split(/\n/)
  task_pairs = task_lines.collect { |l| l.match(/rake (\S+)\s+\#\s(.+)/).to_a }.collect { |l| [l[1], l[2]] }
  taskHash = Hash[*task_pairs.flatten]

  indent = "                          "

  puts "rake #{indent}#Runs the 'default' task"

  taskHash.each_pair do |key, value|
    if key.nil?
      next
    end
    puts "rake #{key}#{indent.slice(0, indent.length - key.length)}##{value}"
  end
end

desc "Packages the binaries into a zip"
task :package => :compile do
  source_files = Dir.glob("src/#{MAIN_PROJECT}/bin/#{COMPILE_TARGET}/**/*")
  dest_files = source_files.map{ |f| f.sub("src/#{MAIN_PROJECT}/bin/#{COMPILE_TARGET}/", "#{MAIN_PROJECT}/")}
  Zip::ZipFile.open(File.join(output_dir, "#{MAIN_PROJECT}.zip"), 'w') do |zipfile|
    0.upto(source_files.size-1) do |i|
        puts "Zipping #{source_files[i]} to #{dest_files[i]}"
        zipfile.add(dest_files[i], source_files[i])
    end
  end
end

def output_dir
  if ENV.keys.include?('CC_BUILD_ARTIFACTS')
    return ENV['CC_BUILD_ARTIFACTS']
  else
    return 'results'
  end
end