.NET, Sitecore and setup development

This blog is used as a memory dump of random thoughts and interesting facts about different things in the world of IT.

ERR_CONNECTION_REFUSED

Today I faced a problem installing the Basic Authentication feature into the Web Server role on Windows 2012 R2. The wizard kept throwing various errors, including a scary OutOfMemoryException. A quick googling turned up a suggestion to run netsh http show iplisten and add 127.0.0.1 (aka Home Sweet Home) to the list if it’s not there. I gave it a try without thinking it through first.

The initial problem was not solved – the wizard kept failing to add the feature – and I finally resolved it with the mighty PowerShell:

Import-Module ServerManager
Add-WindowsFeature Web-Basic-Auth

Later on I had to browse a website hosted on that server, and I suddenly saw the This webpage is not available message. Hmm… First off, I verified that the website worked locally – and it did. That gave me another hint, and I checked whether the bindings were set up correctly. They were! Finally, I started to think that the Basic Authentication feature was to blame – yeah, I know, that was a silly assumption, but hey, silly assumptions feel very comfortable for the brain when it faces magic…

Anyway, fortunately I recalled that quick, thoughtless netsh action, and the magic immediately turned to dust, revealing someone’s ignorance… It turns out that if iplisten lists nothing, the server listens on every IP address; as soon as you add an entry, it starts listening on that IP address only.

Thus, it was all resolved by deleting 127.0.0.1 from that list with netsh http delete iplisten ipaddress=127.0.0.1.
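For future reference, the whole check-and-fix sequence from an elevated command prompt (an empty iplisten list is the healthy default here, meaning “listen on all addresses”):

netsh http show iplisten
netsh http delete iplisten ipaddress=127.0.0.1
netsh http show iplisten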

Want some quick conclusion? Think first, then act!!!

Written with StackEdit.

Build Queue Starvation in CC.NET

Recently I’ve come across an interesting behavior in CruiseControl.NET with regard to build queues and priorities.

If there are many projects on one server, the server is not quite powerful, more than one build queue is configured, and (that’s the last one) these build queues have different priorities, you might end up in a situation where CC.NET checks the same set of projects for modifications over and over again and never starts an actual build. If you add the projects from that server to CCTray, you can see that the number of projects queued for a build reaches a certain value and never decreases.

This phenomenon is called “build queue starvation”. It was described and explained by Damir Arh in his blog.

Let me summarize the main idea.

When one build queue has a higher priority than another, CC.NET favors the projects from the first queue when scheduling modification checks. Now imagine that the trigger interval of the projects in the higher-priority queue is quite small and the number of such projects is big enough. This leads to a situation where the first project in the high-priority queue is scheduled for its second build round before the last project in that queue has been built for the first time.

As a result, the lower-priority queue is “starving” – none of its projects ever gets a chance to be built. The fix suggested in the link above suited my needs – the trigger interval simply had to be increased.
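For illustration, here is a minimal sketch of the relevant ccnet.config pieces; the project and queue names are hypothetical, and the exact priority semantics are worth double-checking against the CC.NET docs for your version:

<project name="Project.A" queue="HighPriority" queuePriority="1">
  <triggers>
    <!-- a wider polling interval leaves room for the other queue to be served -->
    <intervalTrigger seconds="300" />
  </triggers>
</project>

<project name="Project.B" queue="LowPriority" queuePriority="2">
  <triggers>
    <intervalTrigger seconds="300" />
  </triggers>
</project>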

I should say it’s not easy to google this if you’re not familiar with the term “build queue starvation”. Besides, CC.NET doesn’t feel bad in this situation, and hence doesn’t help with any warnings – it just does its job, iterating over the queue and following the instructions.

Written with StackEdit.

Setting Up an Existing Blog on Octopress

Ok, it took me some time and effort to set up the environment for blogging. Consider this post a quick instruction to myself for the next time I have to do this.

So, there’s an existing blog created with Octopress, hosted on GitHub. The task is to set up a brand-new machine to enable a smooth blogging experience.

Note: just in case you have to create a blog from scratch, follow the official Octopress docs – they’re quite clear.

First of all, you should install Ruby. The Octopress docs recommend using either rbenv or RVM for this. Both words sound scary, so don’t hesitate to take the easy path and download an installer from here. On the last page of the installation wizard, choose to add Ruby binaries to the PATH.

When the installer completes, check the installed version:

ruby --version

Then, clone the blog repo from GitHub. Instead of calling rake setup_github_pages as suggested by the Octopress docs, follow these steps found here. Let’s assume we’ve cloned it into the blog folder:

git clone git@github.com:username/username.github.com.git blog
cd blog
git checkout source
mkdir _deploy
cd _deploy
git init
git remote add origin git@github.com:username/username.github.com.git
git pull origin master
cd ..

Now do the following:

gem install bundler
bundle install

This should pull in all the dependencies required by the Octopress engine. Here’s where I hit the first inconsistency in the docs – one of the dependencies (fast-stemmer) fails to install without the DevKit. Download it and run the installer. The installation process is documented here, but the quickest way is:

  • extract the self-extracting archive
  • cd into the extracted folder
  • run ruby dk.rb init
  • then run ruby dk.rb install

After this, re-run the bundle install command.

Well, at this point you should be able to create new posts with the rake new_post[title] command. Generate the resulting HTML with rake generate and preview it with rake preview to make sure it produces what you expect.

An important note about syntax highlighting

Octopress uses Pygments to highlight code. Pygments is a Python tool, so obviously you need Python installed for it to work. Choose a 2.x version of Python – the 3.x versions don’t work. This is important: you won’t be able to generate HTML from Markdown otherwise.
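A quick way to double-check which interpreter ends up on the PATH, same as with Ruby above:

python --version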

That’s it! Hope this will save me some time in the future.

And by the way, this all is written with StackEdit – a highly recommended online markdown editor.

Migrate Attachments From OnTime to TFS

When you move from one bug-tracking system to another, the accuracy of the process is very important: a single missing detail can make a work item useless, and an attached image is often worth a thousand words. Hence, today’s post is about migrating attachments from OnTime to TFS.

NOTE: The samples in this post rely on OnTime SDK, which was replaced by a brand new REST API.

OnTime SDK is a set of web services, and each “area” is usually covered by one or more of them. The operations with attachments are grouped in the /sdk/AttachmentService.asmx web service.

So, the first thing to do is to grab all attachments of the OnTime defect:

var rawAttachments = _attachmentService.GetAttachmentsList(securityToken, AttachmentSourceTypes.Defect, defect.DefectId);

This method returns a DataSet, and you’ll have to enumerate its rows to grab the useful data:

var attachments = rawAttachments.Tables[0].AsEnumerable();
foreach (var attachment in attachments)
{
  // wi is a TFS work item object
  wi.Attachments.Add(GetAttachment(attachment));
}

Now, let’s take a look at the GetAttachment method, which actually does the job. It accepts a DataRow and returns a TFS Attachment object:

private Attachment GetAttachment(DataRow attachmentRow)
{
  // pull the attachment metadata and binary data by its identifier
  var onTimeAttachment = _attachmentService.GetByAttachmentId(securityToken, (int)attachmentRow["AttachmentId"]);

  // dump the binary data to a temp file to feed it to the TFS API
  var tempFile = Path.Combine(Path.GetTempPath(), onTimeAttachment.FileName);
  if (File.Exists(tempFile))
    File.Delete(tempFile);
  File.WriteAllBytes(tempFile, onTimeAttachment.FileData);

  // preserve the original description in the TFS attachment
  return new Attachment(tempFile, onTimeAttachment.Description);
}

A couple of things to notice here:

  • you have to call another web method to pull the binary data of the attachment
  • OnTime attachment metadata is rather useful and can be moved to TFS as is – for instance, the attachment description

Finally, when a new attachment is added to the TFS work item, “increment” the ChangedDate of the work item before saving it. The TFS server often refuses to save work item data when the previous revision has exactly the same date/time stamp. Like this (always works):

wi[CoreField.ChangedDate] = wi.ChangedDate.AddSeconds(5);
wi.Save();
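A small housekeeping note: GetAttachment leaves the temporary copies on disk. A possible cleanup sketch – assuming you collect the generated paths into a tempFiles list inside GetAttachment – is to delete them once Save() has pushed everything to the server:

// hypothetical: tempFiles is a List<string> populated inside GetAttachment
foreach (var tempFile in tempFiles)
{
  File.Delete(tempFile); // safe once TFS has stored the attachment
}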

Hope it’s useful. Good luck!

NAnt Task Behaves Differently in 0.92 and Prior Versions

If you need to copy a folder together with all its contents to another folder in NAnt, you would typically write something like this:
<copy todir="${target}">
  <fileset basedir="${source}" />
</copy>
It turns out this code only works correctly in NAnt 0.92 Alpha and above, where the output is as expected:
[copy] Copying 1 directory to '...'.
However, the same code doesn’t work in earlier versions of NAnt, for instance 0.91. The output is as follows (shown only in -debug+ mode):
[copy] Copying 0 files to '...'.
Obviously, the issue was fixed in 0.92, so the best recommendation is to upgrade the NAnt toolkit. However, if this is not an option for some reason, the following code works correctly in any version:
<copy todir="${target}">
  <fileset basedir="${source}">
    <include name="**/*" />
  </fileset>
</copy>
Hope this saves you some time.

Possible Source of the Signtool ‘Bad Format’ 0x800700C1 Problem

Today I faced a weird problem: signing an EXE file (actually, an installation package) with a valid certificate failed with the following error:
[exec] SignTool Error: SignedCode::Sign returned error: 0x800700C1
[exec] Either the file being signed or one of the DLL specified by /j switch is not a valid Win32 application.
[exec] SignTool Error: An error occurred while attempting to sign: D:\output\setup.exe
This kind of error usually indicates a format incompatibility, when the bitness of signtool.exe and the bitness of the EXE in question don’t match. However, this was not the case.
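As a side note, a quick way to check the bitness of a binary is dumpbin from the Visual Studio command prompt – the machine field of the PE header tells the story:

dumpbin /headers setup.exe | findstr machine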

It turned out that the original EXE file had been generated incorrectly because the machine ran out of disk space. That’s why it was broken and recognized by signtool as a bad-format file. After a disk cleanup, everything worked perfectly and the EXE file was signed correctly.

Hope this saves someone some time.

A Solution Can Build Fine From Inside Visual Studio, but Fail to Build With msbuild.exe

Today I faced an interesting issue. Although I failed to reproduce it on a fresh new project, I think this info might be useful for others.

I have a solution which was upgraded from targeting .NET Framework 2.0 to .NET Framework 3.5. I got a patch from a fellow developer to apply to one of the projects in that solution; the patch adds new files as well as modifying existing ones. After applying the patch, the solution built successfully from inside Visual Studio, but failed to build from the command line with msbuild.exe. The error thrown stated that
“The type or namespace name 'Linq' does not exist in the namespace 'System' ”. 
The msbuild version is 3.5:
[exec] Microsoft (R) Build Engine Version 3.5.30729.5420
[exec] [Microsoft .NET Framework, Version 2.0.50727.5456]
[exec] Copyright (C) Microsoft Corporation 2007. All rights reserved.
It turns out this issue has been hit by other people, and even reported to Microsoft. Microsoft suggested using MSBuild.exe 4.0 to build VS 2010 projects. However, they confirmed it is possible to use MSBuild.exe 3.5 – in this case a reference to System.Core (3.5.0.0) must be explicitly added to the csproj file.
If you try to add a reference to System.Core from inside Visual Studio, you’ll get an error saying:
"A reference to 'System.Core' could not be added. This component is already automatically referenced by the build system"
So, it seems that when you build a solution from inside Visual Studio, it is capable of automatically loading implicitly referenced assemblies. I suppose MSBuild.exe 4.0 (and even the SP1-patched MSBuild.exe 3.5?) can do this as well. Apparently, this has also turned out to be a known problem – you can’t add that reference from the IDE. Open the csproj file in your favorite editor and add this:
<Reference Include="System.Core" />
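For context, the reference line belongs inside one of the existing ItemGroup elements of the csproj file:

<ItemGroup>
  <Reference Include="System.Core" />
</ItemGroup>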
After this, the project builds fine in both VS and MSBuild.

Default Attribute Values for Custom NAnt Tasks

When you create custom NAnt tasks, you can specify various task parameter characteristics: whether the attribute is required, how its value is validated, etc. This is done via .NET custom attributes, for example:
[TaskAttribute("param", Required = true), StringValidator(AllowEmpty = false)]
public string Param { get; set; }
It might be a good idea to be able to specify a default value for a task parameter in a similar way, for instance:
[TaskAttribute("port"), Int32Validator(1000, 65520), DefaultValue(16333)]
public int Port { get; set; }
Let’s examine how this can be implemented. First of all, let’s define the custom attribute for the default value:
/// <summary>
/// The custom attribute for the task attribute default value
/// </summary>
public class DefaultValueAttribute : Attribute
{
  public DefaultValueAttribute(object value)
  {
    this.Default = value;
  }

  public object Default { get; set; }
}
I suppose the standard .NET DefaultValueAttribute could serve this purpose as well, but the one above is very simple and good enough for this sample. Note also that in this situation we could benefit from generic custom attributes, which unfortunately are not supported in C#, although they are quite valid for the CLR.

Now that the attribute is defined, let’s design the way default values are applied at runtime. For this purpose we’ll have to define a special base class for all the custom tasks that should support the default value technique:
public abstract class DefaultValueAwareTask : Task
{
  protected override void ExecuteTask()
  {
    this.SetDefaultValues();
  }

  protected virtual void SetDefaultValues()
  {
    foreach (var property in GetPropertiesWithCustomAttributes<DefaultValueAttribute>(this.GetType()))
    {
      var attribute = (TaskAttributeAttribute)property.GetCustomAttributes(typeof(TaskAttributeAttribute), false)[0];
      var attributeDefaultValue = (DefaultValueAttribute)property.GetCustomAttributes(typeof(DefaultValueAttribute), false)[0];

      if (attribute.Required)
      {
        throw new BuildException("No reason to allow both to be set", this.Location);
      }

      if (this.XmlNode.Attributes[attribute.Name] == null)
      {
        property.SetValue(this, attributeDefaultValue.Default, null);
      }
    }
  }

  private static IEnumerable<PropertyInfo> GetPropertiesWithCustomAttributes<T>(Type type)
  {
    return type.GetProperties(BindingFlags.DeclaredOnly | BindingFlags.Public | BindingFlags.Instance)
      .Where(property => property.GetCustomAttributes(typeof(T), false).Length > 0);
  }
}
Let’s examine what this code actually does. The key method here is SetDefaultValues(). It iterates through the task parameters of the class it is defined in (the public properties marked with the DefaultValueAttribute) and checks whether the value carried by the DefaultValueAttribute should become the actual value of the task parameter. The check is quite simple: if the XmlNode of the NAnt task definition doesn’t contain the parameter in question, the developer didn’t set it explicitly, and the default value should be applied. Moreover, if a task parameter is marked as Required and has a default value at the same time, the combination makes no sense and an exception is thrown.

Obviously, for this technique to work, a custom NAnt task deriving from DefaultValueAwareTask has to call base.ExecuteTask() at the very start of its own ExecuteTask() implementation.
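To make the contract concrete, here is a minimal sketch of a derived task; the task name and the log message are made up for the example:

[TaskName("probe")]
public class ProbeTask : DefaultValueAwareTask
{
  [TaskAttribute("port"), Int32Validator(1000, 65520), DefaultValue(16333)]
  public int Port { get; set; }

  protected override void ExecuteTask()
  {
    base.ExecuteTask(); // applies the default when "port" is omitted in the build file
    Log(Level.Info, "Probing port {0}.", Port);
  }
}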

Generate a Solution File for a Number of C# Project Files in a Folder

Some time ago I wrote my first T4 template, which generates a solution (*.sln) file out of a number of C# project (*.csproj) files located in a folder and all its descendants. Although it turned out to be unnecessary for the task I was working on, and since it’s quite simple, I still decided to share it for further reference. Maybe someone will find it useful. So, below is the entire T4 template, with no extra comments:
Microsoft Visual Studio Solution File, Format Version 11.00
# Visual Studio 2010
<#@ template language="cs" hostspecific="false" #>
<#@ output extension=".sln" #>
<#@ parameter name="Folder" type="System.String" #>
<#@ assembly name="System.Core" #>
<#@ assembly name="System.Xml" #>
<#@ assembly name="System.Xml.Linq" #>
<#@ import namespace="System.IO" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Xml.Linq" #>
<#
if (Directory.Exists(Folder))
{
  var csprojFiles = Directory.GetFiles(Folder, "*.csproj", SearchOption.AllDirectories);
  foreach (var file in csprojFiles)
  {
    ProjectFileMetaData metaData = new ProjectFileMetaData(file, Folder);
    WriteLine("Project(\"{3}\") = \"{0}\", \"{1}\", \"{2}\"", metaData.Name, metaData.Path, metaData.Id, ProjectFileMetaData.ProjectTypeGuid);
    WriteLine("EndProject");
  }
}
#>

<#+
public class ProjectFileMetaData
{
  public static string ProjectTypeGuid = "{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}";

  public ProjectFileMetaData(string file, string root)
  {
    InitProperties(file, root);
  }

  public string Name { get; set; }

  public string Path { get; set; }

  public string Id { get; set; }

  private void InitProperties(string file, string root)
  {
    XDocument xDoc = XDocument.Load(file);
    XNamespace ns = @"http://schemas.microsoft.com/developer/msbuild/2003";
    XElement xElement = xDoc.Root.Elements(XName.Get("PropertyGroup", ns.NamespaceName)).First().Element(XName.Get("ProjectGuid", ns.NamespaceName));
    if (xElement != null)
    {
      this.Id = xElement.Value;
    }

    this.Path = file.Substring(root.Length).TrimStart(new char[] { '\\' });

    this.Name = System.IO.Path.GetFileNameWithoutExtension(file);
  }
}
#>

A Simple Batch Script to Dump the Contents of the Folder and Its Subfolders Recursively

This topic might seem too minor for a blog post. You could argue that it’s covered by a simple call to the dir /s command. Well, that’s true unless you need to perform some action on each line of the list. In that case things can get tricky if you don’t use batch files on a daily basis.

Imagine you need to dump the file paths in a folder and its subfolders to a plain list. Besides, you’d like to replace the absolute path prefix with a UNC share prefix, because each path contains a shared folder and each file will be accessible from inside the network. So, here goes the script:
@echo off
set _from=*repo
set _to=\\server\repo
FOR /F "tokens=*" %%G IN ('dir /s /b /a:-D /o:-D') DO (CALL :replace %%G) >> files.txt
GOTO :eof

:replace
set _str=%1
call set _result=%%_str:%_from%=%_to%%%
echo %_result%
GOTO :eof

Let’s start with the FOR loop. This version of the command loops through the output of another command – in this case, dir. Essentially, we ask dir to run recursively (/s), skip directories (/a:-D), sort by date/time with the newest first (/o:-D) and output in bare format (/b). The FOR command works on top of this, iterating over all lines of the dir output (tokens=*), calling the :replace subroutine for each line and streaming the final result into files.txt.

The subroutine does a very simple thing – it replaces one part of a string with another. Let’s step through it anyway. First, it grabs the input parameter (%1) and saves it into the _str variable. I suppose %1 could be used as is in the expression below, but the number of ‘%’ signs drives me crazy even without that. The next line is the most important – it does the actual replacement job. I’ll try to explain all these % signs: a variable used inside the expression must be wrapped in % (like _from and _to); the expression itself goes between % and % as if it were a variable; and the outermost pair of % is there for escaping purposes, I suppose – you can avoid it if you use plain string literals for the tokens in the expression. Note also the use of the CALL SET statement. Finally, the last line of the subroutine echoes the result.

There’s one last point worth attention: the _from variable, which represents the token to replace, contains a * sign. In the replace expression it means “replace ‘repo’ and everything before it”, as the example below shows.
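To illustrate on a hypothetical path (assuming the repository lives under D:\builds\repo):

set _str=D:\builds\repo\bin\app.dll
rem %_str:*repo=\\server\repo% evaluates to \\server\repo\bin\app.dll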

The best resource I found on the topic is http://ss64.com/nt/.