.NET, Sitecore and setup development

This blog is used as a memory dump of random thoughts and interesting facts about different things in the world of IT.

Use Cognitive Services in VSTS Custom Branch Policies

Imagine a situation where an internationally distributed team works on a project. People speak different languages and often tend to add comments or name things in their native languages. What if we could configure a system which analyzes the pull request contents and prevents its completion unless it is in English? Fortunately, this is possible with the Microsoft Cognitive Services API, Azure Functions and custom branch policies in VSTS. Let’s walk through the process.

There is a detailed article on how to use Azure functions to create custom branch policies. I will use it as a starting point.

First of all, let’s remove some simplifications, such as the hard-coded VSTS PAT and, later, the Cognitive Services API key. These values can be kept in Azure Key Vault as secrets and safely accessed from the Azure function code.

Then, let’s replace the primitive “starts with [WIP]” check from that sample with a more sophisticated verification. I’ll use Microsoft Cognitive Services to detect the language of the pull request title. If the confidence for English is higher than 70%, I’ll assume the text is in English.

Finally, let’s post a proper pull request status, configure a branch policy based on that status, and see how the full solution works.

Keep code secrets in the Azure Key Vault service and access those from Azure Function

There are official docs about how to get started with the Key Vault service. We’ll need to create two secrets: one for the VSTS personal access token (PAT) and another for the API key used when connecting to Cognitive Services. The “create a secret” section under Manage keys and secrets gives a step-by-step guideline on how to do this. Note the Secret Identifier field – it is required to get the secret value from inside the Azure function code.
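For reference, the same two secrets could be created from PowerShell instead of the portal; here is a minimal sketch assuming the AzureRM modules are installed and you are signed in (the vault and secret names match the ones used in the function code below, the values are placeholders):

# store the VSTS PAT and the Cognitive Services API key as Key Vault secrets
$pat = ConvertTo-SecureString "<VSTS_PAT_HERE>" -AsPlainText -Force
Set-AzureKeyVaultSecret -VaultName "ysmainkeyvault" -Name "vstsPAT" -SecretValue $pat

$cogKey = ConvertTo-SecureString "<COGNITIVE_SERVICES_KEY_HERE>" -AsPlainText -Force
Set-AzureKeyVaultSecret -VaultName "ysmainkeyvault" -Name "CognitiveServicesAPIkey" -SecretValue $cogKey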

Now we need to grant our Azure function permission to read the secrets from the key vault. This great article contains detailed steps on how to achieve this. In short, there are two major points:

Enable Managed Service Identity for the Function App

As far as I understand, this makes the function app appear as an Azure AD identity, so permissions can be granted specifically to the function app just as they would be to a normal user:

Add an access policy in the key vault for the Azure function

This will allow the function app to read the secrets in the key vault. The access policy contains a variety of permissions, but for this sample only Get and List under Secret Management Operations are required:
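If you prefer scripting this step, an equivalent policy could be granted with the AzureRM PowerShell module; a hedged sketch, assuming you know the object ID of the function app’s managed identity (the placeholder below):

# allow the function app's identity to read secrets from the vault
Set-AzureRmKeyVaultAccessPolicy -VaultName "ysmainkeyvault" `
    -ObjectId "<FUNCTION_APP_MSI_OBJECT_ID>" `
    -PermissionsToSecrets get,list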

Access key vault secrets from the Azure function

The APIs required for that reside in the following two NuGet packages, which should be added to the project.json file of the function:
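A project.json along these lines should do the trick (the target framework and package versions are just an example and may need adjusting):

{
  "frameworks": {
    "net46": {
      "dependencies": {
        "Microsoft.Azure.KeyVault": "2.3.2",
        "Microsoft.Azure.Services.AppAuthentication": "1.0.0-preview"
      }
    }
  }
}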

Then the code itself is quite trivial:

using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;
// ...

var azureServiceTokenProvider = new AzureServiceTokenProvider();
var kvClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
string vstsPAT = (await kvClient.GetSecretAsync("https://ysmainkeyvault.vault.azure.net/secrets/vstsPAT/<GUID_HERE>")).Value;
string apiKey = (await kvClient.GetSecretAsync("https://ysmainkeyvault.vault.azure.net/secrets/CognitiveServicesAPIkey/<GUID_HERE>")).Value;

The <GUID_HERE> token above should be replaced with the real Secret Identifier of each secret.

Use Microsoft cognitive services to detect the language of the pull request title

Microsoft Cognitive Services is a bunch of AI-driven Azure services which can do much more than just text analytics. In this article we’ll just touch the surface with the language detection part of it. I can highly recommend the Microsoft Cognitive Services: Text Analytics API course on Pluralsight by Matt Kruczek if you want to learn some more. In fact, I’ll use a slightly modified example from that course in this article.

To begin with, a new resource should be instantiated in Azure: Text Analytics API. It is important to choose West US as the location here, no matter which region is closer to you. For some reason, the C# API we’ll work with (from the Microsoft.ProjectOxford.Text NuGet package) addresses the West US API endpoint. This StackOverflow answer helped me understand the root cause.

Once the resource is created, make sure to get the API key (KEY 1 on the image below) and place it in the key vault:

As I mentioned above, the Microsoft.ProjectOxford.Text NuGet package is used to talk to the language analytics service. Let’s add this NuGet package to the project.json of the function, too:
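That is, one more entry in the dependencies section of project.json (the version here is illustrative):

"Microsoft.ProjectOxford.Text": "1.1.0"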

Finally, the code itself is placed in a private method, which is called from the main function each time we need to detect the language of the text:

private static int GetEnglishConfidence(string text, string apiKey, TraceWriter log)
{
    var document = new Document()
    {
        Id = Guid.NewGuid().ToString(),
        Text = text,
    };

    var englishConfidence = 0;

    var client = new LanguageClient(apiKey);
    var request = new LanguageRequest();
    request.Documents.Add(document);

    try
    {
        var response = client.GetLanguages(request);

        var tryEnglish = response.Documents.First().DetectedLanguages.Where(l => l.Iso639Name == "en");

        if (tryEnglish.Any())
        {
            var english = tryEnglish.First();
            englishConfidence = (int) (english.Score * 100);
        }
    }
    catch (Exception ex)
    {
        log.Info(ex.ToString());
    }

    return englishConfidence;
}

Note that it is all about sending a proper request and then reading the score for English among the detected languages in the response.

Post pull request status back to VSTS pull request

The original guideline I referenced at the beginning of this article contains the code of another helper method we are going to change: ComputeStatus. Our version of this method calls the GetEnglishConfidence method listed above and forms the proper JSON to post back to VSTS:

private static string ComputeStatus(string pullRequestTitle, string apiKey, TraceWriter log)
{
    string state = "failed";
    string description = "The PR title is not in English";

    if (GetEnglishConfidence(pullRequestTitle, apiKey, log) >= 70)
    {
        state = "succeeded";
        description = "The PR title is in English! Please, proceed!";
    }

    return JsonConvert.SerializeObject(
        new
        {
            State = state,
            Description = description,

            Context = new
            {
                Name = "AIforCI",
                Genre = "pr-azure-function-ci"
            }
        });
}

Besides, since the call to the language analytics service might take some time, we need a method to post an initial Pending status:

private static string ComputeInitialStatus()
{
    string state = "pending";
    string description = "Verifying title language";

    return JsonConvert.SerializeObject(
        new
        {
            State = state,
            Description = description,

            Context = new
            {
                Name = "AIforCI",
                Genre = "pr-azure-function-ci"
            }
        });
}

As a result, the most interesting part of the Azure function itself will look like this:

// Post the initial status (pending) while the true one is calculated
PostStatusOnPullRequest(pullRequestId, ComputeInitialStatus(), vstsPAT);
// Post the real status based on the language analysis
PostStatusOnPullRequest(pullRequestId, ComputeStatus(pullRequestTitle, apiKey, log), vstsPAT);

Demo time: let’s tie it all together

I’ll assume that all the steps described in the original guideline about Azure functions and pull requests are completed properly. As a result, VSTS knows how to trigger our Azure function on pull request create and update events.

Let’s create the first pull request and let it be titled in pure English:

As soon as the title is verified against the language analytics service, the status changes:

Now, if we try to modify the title to some Russian text, the status changes accordingly:

Finally, we can make a policy out of the pull request status, and decide whether to block the PR completion based on the language verification result:

Conclusion

The combination of custom branch policies in VSTS and the power of Azure functions might result in very flexible solutions, limited only by your imagination. Give it a try and tweak your gated CI to comply with your needs.

Here is the full source code of the solution:

#r "Newtonsoft.Json"

using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using Newtonsoft.Json;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;
using Microsoft.ProjectOxford.Text.Core;
using Microsoft.ProjectOxford.Text.Language;

private static string accountName = "[Account Name]";   // Account name
private static string projectName = "[Project Name]";   // Project name
private static string repositoryName = "[Repo Name]";   // Repository name

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    try
    {
        log.Info("Service Hook Received.");

        // Get secrets from key vault
        var azureServiceTokenProvider = new AzureServiceTokenProvider();
        var kvClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
        string vstsPAT = (await kvClient.GetSecretAsync("https://ysmainkeyvault.vault.azure.net/secrets/vstsPAT/<GUID_HERE>")).Value;
        string apiKey = (await kvClient.GetSecretAsync("https://ysmainkeyvault.vault.azure.net/secrets/CognitiveServicesAPIkey/<GUID_HERE>")).Value;

        // Get request body
        dynamic data = await req.Content.ReadAsAsync<object>();

        log.Info("Data Received: " + data.ToString());

        // Get the pull request object from the service hooks payload
        dynamic jObject = JsonConvert.DeserializeObject(data.ToString());

        // Get the pull request id
        int pullRequestId;
        if (!Int32.TryParse(jObject.resource.pullRequestId.ToString(), out pullRequestId))
        {
            log.Info("Failed to parse the pull request id from the service hooks payload.");
        };

        // Get the pull request title
        string pullRequestTitle = jObject.resource.title;

        log.Info("Service Hook Received for PR: " + pullRequestId + " " + pullRequestTitle);

        // Post the initial status (pending) while the true one is calculated
        PostStatusOnPullRequest(pullRequestId, ComputeInitialStatus(), vstsPAT);
        // Post the real status based on the language analysis
        PostStatusOnPullRequest(pullRequestId, ComputeStatus(pullRequestTitle, apiKey, log), vstsPAT);

        return req.CreateResponse(HttpStatusCode.OK);
    }
    catch (Exception ex)
    {
        log.Info(ex.ToString());
        return req.CreateResponse(HttpStatusCode.InternalServerError);
    }
}

private static void PostStatusOnPullRequest(int pullRequestId, string status, string pat)
{
    string Url = string.Format(
        @"https://{0}.visualstudio.com/{1}/_apis/git/repositories/{2}/pullrequests/{3}/statuses?api-version=4.0-preview",
        accountName,
        projectName,
        repositoryName,
        pullRequestId);

    using (HttpClient client = new HttpClient())
    {
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", Convert.ToBase64String(
                ASCIIEncoding.ASCII.GetBytes(
                string.Format("{0}:{1}", "", pat))));

        var method = new HttpMethod("POST");
        var request = new HttpRequestMessage(method, Url)
        {
            Content = new StringContent(status, Encoding.UTF8, "application/json")
        };

        using (HttpResponseMessage response = client.SendAsync(request).Result)
        {
            response.EnsureSuccessStatusCode();
        }
    }
}

private static int GetEnglishConfidence(string text, string apiKey, TraceWriter log)
{
    var document = new Document()
    {
        Id = Guid.NewGuid().ToString(),
        Text = text,
    };

    var englishConfidence = 0;

    var client = new LanguageClient(apiKey);
    var request = new LanguageRequest();
    request.Documents.Add(document);

    try
    {
        var response = client.GetLanguages(request);

        var tryEnglish = response.Documents.First().DetectedLanguages.Where(l => l.Iso639Name == "en");

        if (tryEnglish.Any())
        {
            var english = tryEnglish.First();
            englishConfidence = (int) (english.Score * 100);
        }
    }
    catch (Exception ex)
    {
        log.Info(ex.ToString());
    }

    return englishConfidence;
}


private static string ComputeInitialStatus()
{
    string state = "pending";
    string description = "Verifying title language";

    return JsonConvert.SerializeObject(
        new
        {
            State = state,
            Description = description,

            Context = new
            {
                Name = "AIforCI",
                Genre = "pr-azure-function-ci"
            }
        });
}

private static string ComputeStatus(string pullRequestTitle, string apiKey, TraceWriter log)
{
    string state = "failed";
    string description = "The PR title is not in English";

    if (GetEnglishConfidence(pullRequestTitle, apiKey, log) >= 70)
    {
        state = "succeeded";
        description = "The PR title is in English! Please, proceed!";
    }

    return JsonConvert.SerializeObject(
        new
        {
            State = state,
            Description = description,

            Context = new
            {
                Name = "AIforCI",
                Genre = "pr-azure-function-ci"
            }
        });
}

Transform Trello List Into Markdown File

I really enjoy reading. And I think I read a lot. I’m sure there are people who read much more, but… well, that’s not what I was going to tell you.

One day I realized that some book recommendations I come across get lost in my memory, so I started a special Trello board. Basically, when I get a book recommendation from a person I respect, I add a new card to the initial To Read list of that board. When a book is finished, I write a short review into the Description of the appropriate card and move it to the final Done list.

Thus, the Done list gets populated with (quite subjective) book reviews. I thought it might be a good idea to post those reviews as a separate article. One day.

Since compiling long lists out of dozens of small snippets is a boring task, it turned out to be a good occasion to play with the Trello API.

So, the idea is to have a PowerShell script which will iterate over the cards in the list and generate a nice Markdown file.

Prerequisites: app key and authorization

First of all, you should acquire the app key – the entity required for all subsequent operations. Simply head over to https://trello.com/app-key to get this API key. Let’s put it into a variable.

$apiKey = "LongSequenceOfCharsWhichIsBasicallyAnApiKey"

Real-world applications will need to ask each user to authorize the application, but since we are just playing with it locally, let’s manually generate a token. Open this URL in your browser:

https://trello.com/1/authorize?expiration=never&scope=read&response_type=token&name=Server%20Token&key=<PASTE_ABOVE_KEY_HERE>

Note that we specify the expiration term (never) and the token scope (read only) here. Copy the token when it appears on the page and save it into another variable.

$token = "EvenLongerSequenceOfCharsWhichIsGuessWhatRightTheToken"

We’ll need both values for any REST API request, so let’s make our life easier:

$authSuffix = "key=$apiKey&token=$token"

Get the Board ID to work with

Okay, the preparations are over, and it’s time to get the ID of the board we’ll work with.

$response = Invoke-RestMethod -Uri "https://api.trello.com/1/members/me?boards=open&board_fields=name&$authSuffix" -Method Get
$boardId = $response.Boards | where name -EQ "Reading" | select -ExpandProperty id

The URL instructs the API to return all my open boards; the response is then piped through a where filter to get the one called “Reading”.

Get the Labels used on the Board

When I finish a book, I mark it with one of the following labels:

  • Green: Recommended
  • Blue: Indifferent
  • Yellow: OK for one-time reading
  • Red: Waste of time

We’ll format the subheaders of each book review depending on the label it has. For instance, recommended books will be highlighted in bold, while clearly not recommended ones will be struck through. Let’s get the list of those labels:

$response = Invoke-RestMethod -Uri "https://api.trello.com/1/boards/$($boardId)?labels=all&label_fields=color&$authSuffix" -Method Get
$labels = $response.Labels | Convert-ArrayToHashTable

As a result, we have a hashtable – LabelId : LabelColor.

Get the List ID containing the Cards

Trello cards live inside lists, so let’s get the Done list. Since I know that the necessary list is the last one, the script is a bit simpler:

$response = Invoke-RestMethod -Uri "https://api.trello.com/1/boards/$($boardId)?lists=open&list_fields=id,name&$authSuffix" -Method Get
$listId = $response.Lists | select -Last 1 -ExpandProperty id

Get the Cards from the List

Now that we have the list ID, we are just one call away from getting the collection of cards. Note that we only request the necessary fields – in this case Title (subheader), Label (subheader formatting) and Description (the actual text of the review).

$response = Invoke-RestMethod -Uri "https://api.trello.com/1/lists/$($listId)?cards=all&card_fields=name,desc,idLabels&$authSuffix" -Method Get
$cards = $response.Cards | select @{Name="Title";       Expression={$_.name}},
                                  @{Name="Label";       Expression={$labels[$_.idLabels[0]]}},
                                  @{Name="Description"; Expression={$_.desc}}

And a bit of formatting magic for dessert

Finally, the collection of cards is transformed to become a nicely formatted Markdown document:

Convert-TableToMarkdown $cards

Run the script, and you’ll get a markdown document, similar to this one:

The full listing of the PowerShell script

Here is the script I ended up with, including some under-the-hood formatting magic:

Function Convert-ArrayToHashTable
{
    begin { $hash = @{} }
    process { $hash[$_.id] = $_.color }
    end { return $hash }
}

function Get-WrapperByLabel
(
  [Parameter(Mandatory=$true)] $label
)
{
  switch ($label) {
      "red"    { "~~" }
      "green"  { "**" }
      "blue"   { "*" }
      Default { "" }
  }
}

Function Convert-TableToMarkdown
(
  [Parameter(Mandatory=$true)] $books
)
{
  $filePath = "$PSScriptRoot\result.md"
  foreach ($book in $books) {
    $wrapper = Get-WrapperByLabel $book.Label
    "## $wrapper$($book.Title)$wrapper" | Out-File -FilePath $filePath -Encoding unicode -Append
    "" | Out-File -FilePath $filePath -Encoding unicode -Append
    "$($book.Description)" | Out-File -FilePath $filePath -Encoding unicode -Append
  }
}

# Head over to https://trello.com/app-key to get this API key
$apiKey = "LongSequenceOfCharsWhichIsBasicallyAnApiKey"

# Use this shortcut for local test code: https://trello.com/1/authorize?expiration=never&scope=read&response_type=token&name=Server%20Token&key=<PASTE_ABOVE_KEY_HERE>
$token = "EvenLongerSequenceOfCharsWhichIsGuessWhatRightTheToken"

# shape the authentication suffix to append to each URI in REST calls
$authSuffix = "key=$apiKey&token=$token"

# get the ID of the target board
$response = Invoke-RestMethod -Uri "https://api.trello.com/1/members/me?boards=open&board_fields=name&$authSuffix" -Method Get
$boardId = $response.Boards | where name -EQ "Reading" | select -ExpandProperty id

# get the labels used on the board
$response = Invoke-RestMethod -Uri "https://api.trello.com/1/boards/$($boardId)?labels=all&label_fields=color&$authSuffix" -Method Get
$labels = $response.Labels | Convert-ArrayToHashTable

# get the last list (which contains the items I'm done reading)
$response = Invoke-RestMethod -Uri "https://api.trello.com/1/boards/$($boardId)?lists=open&list_fields=id,name&$authSuffix" -Method Get
$listId = $response.Lists | select -Last 1 -ExpandProperty id

# get the list of cards (necessary fields only)
$response = Invoke-RestMethod -Uri "https://api.trello.com/1/lists/$($listId)?cards=all&card_fields=name,desc,idLabels&$authSuffix" -Method Get
$cards = $response.Cards | select @{Name="Title";       Expression={$_.name}},
                                  @{Name="Label";       Expression={$labels[$_.idLabels[0]]}},
                                  @{Name="Description"; Expression={$_.desc}}

Convert-TableToMarkdown $cards

P.S. The Trello REST API is clear, concise and well-documented here. Great job!

VSTS and TeamCity Commit Status Publisher

Some time ago the VSTS team added a feature called Pull Request Status Extensibility. It unlocked the door for external services to post custom statuses to pull requests created in Git repositories hosted in VSTS. Once the status is posted, it is possible to make a branch policy out of it, which makes it a powerful feature.

According to the VSTS Feature Timeline, Pull Request Status Extensibility will arrive in on-premises TFS 2018 RC1 and later.

Fortunately, TeamCity has just added the option to send pull request statuses via its Commit Status Publisher in the most recent build of version 2017.2.

At the moment of writing this post, version 2017.2 is still in EAP, and I’ll use 2017.2 EAP4 as the first build the feature has arrived with.

These two pieces assemble in a nice picture where you can host your project in VSTS while keeping the build part entirely in TeamCity. In this post, I’ll guide you through the steps required to configure this beautiful setup.

TeamCity: basic setup of the build project

To begin with, we’ll add a connection to VSTS in TeamCity. It is not required, but helps a lot in the further configuration of VCS root and build features. Navigate to Administration > <Root Project> > Connections and click “Add Connection” button:

Now, let’s create a new build project. Thanks to the connection configured prior to this step, the VCS root configuration is as easy as clicking a Visual Studio icon:

Choose the repository we’d like to target and TeamCity will form the proper clone URL. Note that the Branch Specification field is set to watch pull requests too.

For the sake of this demo the build project itself is quite simple: it contains just one build configuration, which in turn consists of a single PowerShell build step that fakes the real build process with a few seconds of sleep. There’s also a VCS trigger to run the build on changes in the default branch (+:&lt;default&gt;) as well as pull request merges (+:pull/*/merge).
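The fake build step is nothing more than a short PowerShell snippet along these lines (the sleep duration is arbitrary):

# pretend to do some real build work
Write-Host "Restoring, compiling, testing..."
Start-Sleep -Seconds 30
Write-Host "Build finished."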

Finally, we should configure the Commit Status Publisher, which does all the magic. Switch to Build Features on the left pane and click the “Add Build Feature” button:

Note the checkbox that hides in the Advanced Options view. It should be turned on in order to enable pull request status publishing.

Ideally, you should generate another personal access token in VSTS with only Code (status) and Code (read) scopes specified. However, being lazy, I’ve just clicked the magic wand icon and TeamCity pulled the all-scopes access token from the connection.

VSTS: creating a pull request with status from TeamCity

Now, when we’re done with the TeamCity configuration, let’s go ahead and create a pull request in our VSTS Git repository. When TeamCity detects the change, it starts building the pull request. At the same time, the pull request view in VSTS displays the appropriate status:

Once the build has completed, the status is refreshed:

If you click the link, it navigates to the completed build page in TeamCity:

VSTS: Make branch policy out of the TeamCity build status

Once the external service has published its status to a pull request at least once, it is possible to configure that status to serve as a branch policy for this and all other pull requests in the repository. Let’s do this now.

Navigate to the branch policies of the master branch and click “Add Service” in “Require approval from external services” section:

Choose the target service from the dropdown (its name is composed of the TeamCity build project and configuration) and modify other options according to your needs. Note that it is possible to configure the service so that it behaves as a normal branch policy. For example, the status can be required and can expire when the source branch gets an update:

Finally, click Save and push another change to the existing pull request. As soon as the pull request is updated, the Status section disappears and the new policy is displayed. It stays in waiting mode until the TeamCity build is started:

Once the build is started, the policy status changes to Pending:

Finally, when the build is done, it is also reflected on the custom policy status:

Similar to the pull request status behavior, it is possible to click the link and navigate to the build view in TeamCity.

TeamCity: build normal branches and post the status back to VSTS

When we merge the pull request, the build of the master branch is triggered in TeamCity. If you switch to the Branches view in VSTS, you can see an In Progress icon in the Build column of the master branch:

Once the build is completed, the icon changes to the appropriate state (Success in our case):

Conclusion

In this article, we’ve quickly run through the steps required to configure close integration between a VSTS Git repository and a TeamCity build project. Note that I haven’t written a single line of code for this to happen. This setup might be useful for projects that have extensive build configuration in TeamCity, but would like to benefit from the fantastic pull request user experience in VSTS.

ERR_CONNECTION_REFUSED

Today I faced a problem installing the Basic Authentication feature into the Web Server role on Windows 2012 R2. The wizard kept throwing various errors, including a scary OutOfMemoryException. A quick googling found a suggestion to run netsh http show iplisten and add 127.0.0.1 (aka Home Sweet Home) to the list if it’s not there. I gave it a try without giving it a thought first.

The initial problem was not solved – the wizard kept failing to add the feature, and I finally resolved it with the mighty PowerShell:

Import-Module ServerManager
Add-WindowsFeature Web-Basic-Auth

Later on I had to browse to a website hosted on that server, and I suddenly saw the This webpage is not available message. Hmm… First off, I verified that the website works locally – and it did. This gave me another hint, and I checked whether the bindings were set up correctly. And they were! Finally, I started to think that the Basic Authentication feature was to blame – yeah, I know, that was a stupid assumption, but hey, stupid assumptions feel very comfortable for the brain when it faces magic…

Anyway, fortunately I recalled that quick dumb action I had done with netsh, and the magic immediately turned into dust, revealing someone’s ignorance… It turns out that if iplisten does not list anything, it means listen to everything, on any IP address. When you add something there, it starts listening on that IP address only.

Thus, it was all resolved by deleting 127.0.0.1 from that list with netsh http delete iplisten ipaddress=127.0.0.1.

Want some quick conclusion? Think first, then act!!!

Written with StackEdit.

Build Queue Starvation in CC.NET

Recently I’ve come across an interesting behavior in CruiseControl.NET in regards to the build queues and priorities.

If there are many projects on one server, the server is not quite powerful, more than one build queue is configured, and (that’s the last one) these build queues have different priorities, you might end up in a situation where CC.NET checks the same set of projects for modifications over and over again and never starts an actual build. If you add the projects from that server to CCTray, you can observe that the number of projects queued for the build reaches a certain number and never decreases.

This phenomenon is called “build queue starvation”. It was described and explained by Damir Arh in his blog.

Let me summarize the main idea.

When one build queue has a higher priority than another, CC.NET favors the projects from the first queue when scheduling modification checks. Now imagine that the trigger interval of the projects in the higher-priority queue is quite small and the number of such projects is big enough. This leads to a situation where the first project in the high-priority queue is scheduled for its second build round before the last project in that queue has been built for the first time.

As a result, the lower-priority queue is “starving” – none of its projects ever gets a chance to be built. The fix suggested in the link above suited my needs – the trigger interval was simply increased.
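For illustration, the relevant bits of ccnet.config look roughly like this (project and queue names are made up); increasing the seconds value of the interval trigger on the numerous high-priority projects is what stopped the starvation for me:

<project name="FastProject01" queue="HighPriority" queuePriority="1">
  <triggers>
    <!-- a small interval combined with many projects in the prioritized queue caused the starvation -->
    <intervalTrigger seconds="600" />
  </triggers>
  <!-- source control, tasks, etc. -->
</project>
<project name="NightlyProject" queue="LowPriority" queuePriority="2">
  <triggers>
    <intervalTrigger seconds="600" />
  </triggers>
</project>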

I should say it’s not easy to google this if you’re not familiar with the term “build queue starvation”. Besides, CC.NET doesn’t consider this situation a problem and hence doesn’t help with any warnings – it just does its job, iterating the queue and following the instructions.

Written with StackEdit.

Setting Up an Existing Blog on Octopress

Ok, it took me some time and effort to set up the environment for blogging. Consider this post a quick instruction to myself for the next time I have to do this.

So, there’s an existing blog created with Octopress, hosted on GitHub. The task is to set up a brand new machine to enable a smooth blogging experience.

Note: just in case you have to create a blog from scratch, follow the official Octopress docs, it’s quite clear.

First of all, you should install Ruby. The Octopress docs recommend using either rbenv or RVM for this. Both words sound scary, so don’t hesitate to take the easy path and download an installer from here. On the last page of the installation wizard, choose to add the Ruby binaries to the PATH:

When the installer completes, check the installed version:

ruby --version

Then, clone the blog repo from GitHub. Instead of calling rake setup_github_pages as suggested by the Octopress docs, follow these steps found here. Let’s assume we’ve cloned it into the blog folder:

git clone git@github.com:username/username.github.com.git blog
cd blog
git checkout source
mkdir _deploy
cd _deploy
git init
git remote add origin git@github.com:username/username.github.com.git
git pull origin master
cd ..

Now do the following:

gem install bundler
bundle install

This should pull all the dependencies required for the Octopress engine. Here’s where I faced the first inconsistency in the docs – one of the dependencies (fast-stemmer) fails to install without the DevKit. Download it and run the installer. The installation process is documented here, but the quickest way is:

  • self-extract the archive
  • cd to that folder
  • run ruby dk.rb init
  • then run ruby dk.rb install

After this, re-run the bundle install command.

Well, at this point you should be able to create new posts with the rake new_post[title] command. Generate the resulting HTML with rake generate and preview it with rake preview to make sure it produces what you expect.
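In other words, the day-to-day workflow boils down to these three commands (the post title is just an example):

rake new_post["Setting Up an Existing Blog on Octopress"]
rake generate
rake preview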

An important note about syntax highlighting

Octopress uses Pygments to highlight the code. This is a Python thing, and obviously you should install Python for this to work. Choose the 2.x version of Python – the 3.x version doesn’t work. This is important: you won’t be able to generate HTML from Markdown otherwise.

That’s it! Hope this will save me some time in future.

And by the way, this all is written with StackEdit – a highly recommended online markdown editor.

Migrate Attachments From OnTime to TFS

When you move from one bug tracking system to another, the accuracy of the process is very important. A single missing detail can make a work item useless, and an attached image is often worth a thousand words. Hence, today’s post is about migrating attachments from OnTime to TFS.

NOTE: The samples in this post rely on OnTime SDK, which was replaced by a brand new REST API.

OnTime SDK is a set of web services, and each “area” is usually covered by one or more web services. The operations with attachments are grouped in the /sdk/AttachmentService.asmx web service.

So, the first thing to do is to grab all attachments of the OnTime defect:

var rawAttachments = _attachmentService.GetAttachmentsList(securityToken, AttachmentSourceTypes.Defect, defect.DefectId);

This method returns a DataSet, and you’ll have to enumerate its rows to grab the useful data:

var attachments = rawAttachments.Tables[0].AsEnumerable();
foreach (var attachment in attachments)
{
  // wi is a TFS work item object
  wi.Attachments.Add(GetAttachment(attachment));
}

Now, let’s take a look at the GetAttachment method, which actually does the job. It accepts a DataRow and returns a TFS Attachment object:

private Attachment GetAttachment(DataRow attachmentRow)
{
  var onTimeAttachment = _attachmentService.GetByAttachmentId(securityToken, (int)attachmentRow["AttachmentId"]);

  var tempFile = Path.Combine(Path.GetTempPath(), onTimeAttachment.FileName);
  if (File.Exists(tempFile))
    File.Delete(tempFile);
  File.WriteAllBytes(tempFile, onTimeAttachment.FileData);

  return new Attachment(tempFile, onTimeAttachment.Description);
}

A couple of things to notice here:

  • you have to call another web method to pull binary data of the attachment
  • OnTime attachment metadata is rather useful and can be moved as is to TFS, for instance, attachment description

Finally, when a new attachment is added to the TFS work item, “increment” the ChangedDate of the work item before saving it. The TFS server often refuses to save work item data if the previous revision has exactly the same date/time stamp. Like this (always works):

wi[CoreField.ChangedDate] = wi.ChangedDate.AddSeconds(5);
wi.Save();

Hope it’s useful. Good luck!

NAnt Task Behaves Differently in 0.92 and Prior Versions

If you need to copy a folder together with all its contents to another folder in NAnt, you would typically write something like this:
<copy todir="${target}">
  <fileset basedir="${source}" />
</copy>
It turns out this code works correctly in NAnt 0.92 Alpha and above. The output is expected:
[copy] Copying 1 directory to '...'.
However, the same code doesn’t work in prior versions of NAnt, for instance, 0.91. The output is as follows (only in -debug+ mode):
[copy] Copying 0 files to '...'.
Obviously, the issue was fixed in 0.92, so the best recommendation would be to upgrade the NAnt toolkit. However, if this is not an option for some reason, the following code seems to work correctly for any version:
<copy todir="${target}">
  <fileset basedir="${source}">
    <include name="**/*" />
  </fileset>
</copy>
Hope this saves you some time.

Possible Source of the Signtool ‘Bad Format’ 0x800700C1 Problem

Today I faced a weird problem. The operation of signing an EXE file (actually, an installation package) with a valid certificate failed with the following error:
[exec] SignTool Error: SignedCode::Sign returned error: 0x800700C1
[exec] Either the file being signed or one of the DLL specified by /j switch is not a valid Win32 application.
[exec] SignTool Error: An error occurred while attempting to sign: D:\output\setup.exe
This kind of error is usually an indication of a format incompatibility, when the bitness of signtool.exe and the bitness of the EXE in question don’t match. However, this was not the case.

It turns out that the original EXE file had been generated incorrectly because of a lack of disk space. That’s why it was broken and was recognized by signtool as a badly formatted file. After a disk cleanup everything worked perfectly and the EXE file was signed correctly.

Hope this saves someone some time.

A Solution Can Build Fine From Inside the Visual Studio, but Fail to Build With msbuild.exe

Today I faced an interesting issue. Although I failed to reproduce it on a fresh new project, I think this info might be useful for others.
I have a solution which was upgraded from targeting .NET Framework 2.0 to .NET Framework 3.5. I got a patch from a fellow developer to apply to one of the projects of that solution. The patch adds new files as well as modifies existing ones. After applying the patch, the solution builds successfully from inside Visual Studio, but fails to build from the command line with msbuild.exe. The error thrown states that
“The type or namespace name 'Linq' does not exist in the namespace 'System'”.
The msbuild version is 3.5:
[exec] Microsoft (R) Build Engine Version 3.5.30729.5420
[exec] [Microsoft .NET Framework, Version 2.0.50727.5456]
[exec] Copyright (C) Microsoft Corporation 2007. All rights reserved.
It turns out this issue has been encountered by other people, and even reported to Microsoft. Microsoft suggested using MSBuild.exe 4.0 to build VS 2010 projects. However, they confirmed it is possible to use MSBuild.exe 3.5 – in this case a reference to System.Core (3.5.0.0) must be explicitly added to the csproj file.
If you try to add a reference to System.Core from inside the Visual Studio, you’ll get the error saying:
"A reference to 'System.Core' could not be added. This component is already automatically referenced by the build system"
So, it seems that when you build a solution from inside Visual Studio, it is capable of automatically loading implicitly referenced assemblies. I suppose MSBuild.exe 4.0 (and even SP1-patched MSBuild.exe 3.5?) can do this as well. Apparently, this has also turned out to be a known limitation – you can’t add that reference from the IDE. Open the csproj file in your favorite editor and add this:
<Reference Include="System.Core" />
After this, the project builds fine in both VS and MSBuild.