<p><em>sklyarenko.net - The technical blog of Yan Sklyarenko</em></p>
<h1><a href="http://sklyarenko.net/posts/visualize-git-health-checks-in-pull-request">Visualize Git health checks in a pull request</a></h1>
<p><em>2019-09-25</em></p>
<p>Every team of developers eventually creates a set of practices to follow for repository maintenance. This includes commit message rules, branch naming, merge strategies, etc. Some of those practices can be configured as IDE settings, for instance, automatic removal of trailing spaces. Others, such as rebasing vs. merging, rely on the discipline and the culture of the team.</p>
<blockquote class="blockquote">
<p>Perhaps the example with merge strategies is not entirely accurate, since <a href="https://devblogs.microsoft.com/devops/pull-requests-with-rebase/">appropriate branch policies</a> have arrived in Azure DevOps.</p>
</blockquote>
<p>However, whenever the human factor is involved, there's always a source of unintentional errors. It would be great to automate the most important checks and run them as part of pull request verification. This can be done with a simple PowerShell script triggered by a normal build definition. But as the number of checks grows, it becomes harder to understand which of them are violated.</p>
<p>For one of our projects, we've come up with a small enhancement to visualize the checks and make it easier to fix violations. The idea is to send a pull request status per check back to the parent pull request. If at least one check fails, the entire build fails. You can still browse the build log and dig the actual error out, but the handy list of statuses visualizes the situation and helps narrow the problem down faster.</p>
<p>I'll describe the key points of the process using a check that verifies no <code>fixup!</code> or <code>squash!</code> commits are present.</p>
<p>The <a href="https://dev.to/koffeinfrei/the-git-fixup-workflow-386d">Git fixup flow</a> implies that you add fixup commits and eventually run an interactive rebase with the <code>--autosquash</code> flag, which marks the relevant commits for fixup. If you forget the <code>--autosquash</code> switch, you might end up with the <code>fixup!</code> commits left as is in the Git history.</p>
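<p>For reference, the flow looks roughly like this (a quick sketch; the commit hash is made up, and I assume the branch is rebased onto <code>master</code>):</p>
<pre><code class="language-PowerShell"># commit a fix targeting an earlier commit; Git prefixes the subject with "fixup!"
git commit --fixup=a1b2c3d

# later, squash all fixup! commits into their targets in one go
git rebase --interactive --autosquash master
</code></pre>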
<p>Here is the PowerShell code to verify the presence of <code>fixup!</code> or <code>squash!</code> commits in the range of commits brought by the pull request:</p>
<pre><code class="language-PowerShell">$fixupStatus = "succeeded"
"fixup!", "squash!" | Foreach-Object {
$fixups = git log $sourceCommit $targetCommit --grep="$_" --oneline
if ($fixups.Count -ne 0) {
Print-Error("'$_' commits detected!:`n $([system.String]::Join(`"`n `", $fixups))")
$fixupStatus = "failed"
$exitCode += 1
}
}
Post-PullRequestStatus -name "git-fixup-status" -description "Git Fixup Check" -state $fixupStatus
</code></pre>
<p>Let's step through the key points here:</p>
<ul>
<li>The <code>$sourceCommit</code> and <code>$targetCommit</code> point to the HEAD of the source branch and the HEAD of the target branch, respectively. They are calculated from the incoming information of the pull request - the source branch and the target branch - using Git commands (see the sketch right after this list)</li>
<li>The <code>$fixupStatus</code> is the actual state (<code>succeeded</code> or <code>failed</code>) of the pull request status to post back</li>
<li>The <code>$exitCode</code> is incremented to indicate an error when the check is violated, without stopping the script until all other checks have run</li>
<li>The <code>Post-PullRequestStatus</code> is a helper function to post the pull request status via the Azure DevOps REST API. It needs a name to uniquely identify the status, a description for visualization, and the actual state calculated earlier</li>
</ul>
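<p>As for the first point, here is a minimal sketch of how the two commits could be resolved (the <code>$sourceBranchName</code> and <code>$targetBranchName</code> variables are hypothetical placeholders for the values the pull request brings in):</p>
<pre><code class="language-PowerShell"># resolve the tips of the source and target branches
$sourceCommit = git rev-parse "origin/$sourceBranchName"
$targetCommit = git rev-parse "origin/$targetBranchName"
</code></pre>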
<p>And here is how <code>Post-PullRequestStatus</code> is implemented:</p>
<pre><code class="language-PowerShell">function Post-PullRequestStatus {
param (
[string]$name,
[string]$description,
[string]$state,
[string]$genre = "git-health-check"
)
$body = @{
state = $state
description = $description
context = @{
name = $name
genre = $genre
}
} | ConvertTo-Json
Send-Request -method "POST" -Body $body | Out-Null
}
</code></pre>
<p>The <code>$genre</code> parameter is used to group statuses of a similar nature. Since our verifications are all about Git, it defaults to <code>git-health-check</code> in the code above.</p>
<blockquote class="blockquote">
<p>NOTE: The <code>Send-Request</code> is a helper function wrapping the details of the Azure DevOps REST API call. I won't go into the details since it's out of the scope of this article.</p>
</blockquote>
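<p>For the curious, a minimal sketch of what such a wrapper might look like - this is my assumption, not the actual implementation; the <code>$statusesUrl</code> variable is hypothetical, and the authentication reuses the build's predefined <code>SYSTEM_ACCESSTOKEN</code>:</p>
<pre><code class="language-PowerShell">function Send-Request {
    param (
        [string]$method,
        [string]$Body
    )
    # build a basic auth header from the job access token
    $token = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$($env:SYSTEM_ACCESSTOKEN)"))
    Invoke-RestMethod -Uri $statusesUrl -Method $method -Body $Body `
        -ContentType "application/json" -Headers @{ Authorization = "Basic $token" }
}
</code></pre>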
<p>Now, as soon as the pull request is created or updated, the build definition we called <code>Git Health Check Status</code> is triggered and, among other things, checks for the fixup and squash commits. If the rule is violated, the <code>failed</code> status is sent back to the pull request and the entire build fails:</p>
<p><img src="./images/september2019/prStatusFailed.png" class="img-fluid" alt="Pull Request Failed Status" title="Pull Request Failed Status" /></p>
<p>The details can be found in the normal build log of the <code>Git Health Check Status</code> build:</p>
<p><img src="./images/september2019/gitHealthCheckBuildLog.png" class="img-fluid" alt="Build Log Details" /></p>
<p>When the problem is fixed (that is, the branch is interactively <em>rebased</em> and <em>autosquashed</em>), the Health Check is green:</p>
<p><img src="./images/september2019/prStatusSucceeded.png" class="img-fluid" alt="Pull Request Status Succeeded" /></p>
<p>The pull request status can be prettified even further if the <code>targetUrl</code> attribute of the <code>body</code> object is supplied (see <a href="https://docs.microsoft.com/en-us/rest/api/azure/devops/git/pull%20request%20statuses/create?view=vsts-rest-tfs-4.1#on-iteration">the API specs</a> for more details). For instance, the link could point to the docs explaining this particular check in detail: why the <code>fixup!</code> commits should be kept out of the Git history, which Git commands to run to resolve the problem, etc.</p>
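<p>A sketch of the change to the <code>$body</code> object (the docs URL is, of course, made up):</p>
<pre><code class="language-PowerShell">$body = @{
    state       = $state
    description = $description
    targetUrl   = "https://example.com/docs/git-health-checks/$name" # hypothetical docs link
    context     = @{
        name  = $name
        genre = $genre
    }
} | ConvertTo-Json
</code></pre>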
<p>Now, with this setup in place, it is quite easy to add more checks. All you have to do is add an appropriate PowerShell snippet to perform the check and send the correct status back to the pull request - for instance, something like the commit subject check below.</p>
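<p>As an illustration only (this check is my example, not part of the original setup), a hypothetical rule that fails when a commit subject exceeds the conventional 72 characters would follow the very same pattern:</p>
<pre><code class="language-PowerShell">$subjectStatus = "succeeded"
# collect the subjects of the commits in the pull request range
$longSubjects = git log $sourceCommit $targetCommit --pretty=format:"%s" |
    Where-Object { $_.Length -gt 72 }
if ($longSubjects.Count -ne 0) {
    $subjectStatus = "failed"
    $exitCode += 1
}
Post-PullRequestStatus -name "git-subject-status" -description "Commit Subject Length Check" -state $subjectStatus
</code></pre>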
<h1><a href="http://sklyarenko.net/posts/devil-in-the-details">Talk to Netlify REST API from Cake build script running on Azure Pipelines</a></h1>
<p><em>2019-09-24</em></p>
<p>I've been migrating this blog from Octopress to Wyam the other day. In general, it's a pretty straightforward process, but I faced a situation where a combination of little things led to strange behavior I had to fight for almost a day. As usual, the devil is in the details, so here is a quick summary of what happened and what I ended up with.</p>
<p><a href="https://wyam.io/">Wyam</a> is a static content generator, which is very easy to start with, highly configurable and extensible. The documentation recommends <a href="https://wyam.io/docs/deployment">several platforms to deploy</a> the final result to, including <a href="https://www.netlify.com/">Netlify</a>, which is very friendly to lightweight static sites.</p>
<p>The deployment to Netlify can be done in a number of ways, described in <a href="https://wyam.io/docs/deployment/netlify">Wyam docs</a>. One of those ways is to use <a href="https://netlifysharp.netlify.com">NetlifySharp</a>, a .NET client for the Netlify REST API, also implemented by <a href="https://daveaglick.com">Dave Glick</a>, the father of Wyam. Fortunately, NetlifySharp provides a handy <a href="https://cakebuild.net/">Cake</a> plugin to make the goal even easier to achieve.</p>
<p>The sample, provided by the <a href="https://wyam.io/docs/deployment/netlify#use-netlifysharp">official documentation</a>, can be taken literally as is:</p>
<pre><code class="language-csharp">Task("Netlify")
.Does(() =>
{
var netlifyToken = EnvironmentVariable("NETLIFY_TOKEN");
if(string.IsNullOrEmpty(netlifyToken))
{
throw new Exception("Could not get Netlify token environment variable");
}
// Initialize the Netlify client and then issue a REST API call to update the site
Information("Deploying output to Netlify");
var client = new NetlifyClient(netlifyToken);
client.UpdateSite($"mysite.netlify.com", MakeAbsolute(Directory("./output")).FullPath).SendAsync().Wait();
});
</code></pre>
<p>Since Netlify's own continuous deployment options do not include the Cake build system at the moment of writing this, it makes sense to delegate this work to another CI/CD platform, for example, <a href="https://azure.microsoft.com/ru-ru/services/devops/pipelines">Azure Pipelines</a>. It supports keeping the pipeline configuration in YAML, so the entire configuration boils down to just a couple of lines (stolen from <a href="https://github.com/daveaglick/daveaglick/blob/master/azure-pipelines.yml">Dave's own blog</a>):</p>
<pre><code class="language-yaml">trigger:
- master
steps:
- script: build -target BuildServer
env:
NETLIFY_TOKEN: $(NETLIFY_TOKEN)
</code></pre>
<p>Okay, that's enough of a prelude; we are approaching the point where the issue popped up. Take a closer look at the last line of the code above: <code>NETLIFY_TOKEN: $(NETLIFY_TOKEN)</code>. Basically, it's saying:</p>
<blockquote class="blockquote">
<p><em>Take a variable called <code>NETLIFY_TOKEN</code> defined in the pipeline, and pass it to the <code>script</code> build step as a value of the <code>NETLIFY_TOKEN</code> environment variable</em>.</p>
</blockquote>
<p>This implies that the variable <code>NETLIFY_TOKEN</code> must be defined first. If you forget to do this, make a mistake in the variable name, or provide an invalid token, you won't get any warning from Azure Pipelines, which is obviously correct behavior - the platform knows nothing about your intentions. In my case the variable was not specified, and the above YAML configuration set the environment variable to the string literal <code>$(NETLIFY_TOKEN)</code>.</p>
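<p>One cheap way to catch the unexpanded-macro case early is a guard at the start of the build. A sketch, expressed as a PowerShell pre-step - my addition, not part of the original setup:</p>
<pre><code class="language-powershell"># fail fast if the pipeline variable was never defined:
# Azure Pipelines leaves the macro text as a literal in that case
if ([string]::IsNullOrEmpty($env:NETLIFY_TOKEN) -or $env:NETLIFY_TOKEN -eq '$(NETLIFY_TOKEN)') {
    throw "NETLIFY_TOKEN is missing or was not substituted"
}
</code></pre>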
<p>Sadly (and this is NOT expected), the Netlify REST API won't warn you either when an invalid token is provided. It will <em>swallow</em> the invalid token and do nothing. Here's a build log snippet:</p>
<p><img src="./images/september2019/buildLog01.png" class="img-fluid" alt="Netlify target did nothing" title="Netlify target did nothing" /></p>
<p>The only suspicious thing here is the very short build time of the <code>Netlify</code> target. Given that it is supposed to send some content over HTTP, it should have taken longer.</p>
<p>Even though the invalid token was provided, Netlify responded with status code <code>200</code>, just with a different set of response headers:</p>
<p><img src="./images/september2019/buildLog02.png" class="img-fluid" alt="Netlify response to invalid token" title="Netlify response to invalid token" /></p>
<p>The best quick fix I came up with was to check the response headers explicitly and fail the build in case the <code>Connection</code> header contains the <code>close</code> value. The modified <code>Netlify</code> target in the Cake build script looks like this:</p>
<pre><code class="language-csharp">Task("Netlify")
.Does(() =>
{
var netlifyToken = EnvironmentVariable("NETLIFY_TOKEN");
if(string.IsNullOrEmpty(netlifyToken))
{
throw new Exception("Could not get Netlify token environment variable");
}
Information("Deploying output to Netlify");
var client = new NetlifyClient(netlifyToken);
client.ResponseHandler = x =>
{
if (x.Headers.Connection != null && x.Headers.Connection.Contains("close"))
{
throw new Exception("Most likely invalid Netlify token was supplied");
}
};
client.UpdateSite($"mysite.netlify.com", MakeAbsolute(Directory("./output")).FullPath).SendAsync().Wait();
});
</code></pre>
<p>Now, the situation described above will fail the build and point out a potential build problem:</p>
<p><img src="./images/september2019/buildLog03.png" class="img-fluid" alt="Invalid Netlify token" title="Invalid Netlify token" /></p>
<p>When the problem is fixed and the correct token is placed in the environment variable, the <code>Netlify</code> target does its job:</p>
<p><img src="./images/september2019/buildLog04.png" class="img-fluid" alt="Netlify target did the job" title="Netlify target did the job" /></p>
<p>As usual, there's no mystery - just a sequence of mistakes multiplied by coincidence.</p>
<h1><a href="http://sklyarenko.net/posts/use-cognitive-services-in-vsts-custom-branch-policies">Use cognitive services in VSTS custom branch policies</a></h1>
<p><em>2018-03-25</em></p>
<p>Imagine an international, distributed team working on a project. People speak different languages and often tend to add comments or name things in their native language. What if we could configure a system that analyzes the pull request contents and prevents its completion unless they are in English? Fortunately, it is possible with the Microsoft cognitive services API, Azure functions, and custom branch policies in VSTS. Let's walk through the process.</p>
<p>There is a detailed article on <a href="https://docs.microsoft.com/en-us/vsts/git/how-to/create-pr-status-server-with-azure-functions">how to use Azure functions to create custom branch policies</a>. I will use it as a starting point.</p>
<p>First of all, let's remove some simplifications, like the hard-coded VSTS PAT and, later, the cognitive services API key. Those entities can be kept in the Azure key vault as secrets and safely addressed from the Azure function code.</p>
<p>Then, let's replace the primitive "starts with [WIP]" check from that sample with a more sophisticated verification. I'll use Microsoft cognitive services to detect the language of the pull request title. If the language confidence is higher than 70%, I'll assume the text is in English.</p>
<p>Finally, let's post a proper pull request status, configure a branch policy out of that status, and see how the full solution works.</p>
<h2 id="keep-code-secrets-in-the-azure-key-vault-service-and-access-those-from-azure-function">Keep code secrets in the Azure Key Vault service and access those from Azure Function</h2>
<p>There are official docs about <a href="https://docs.microsoft.com/en-us/azure/azure-stack/user/azure-stack-kv-manage-portal">how to get started with the key vault service</a>. We'll need to create two secrets: one for the VSTS personal access token (PAT) and another for the API key used to connect to cognitive services. The "create a secret" section under <a href="https://docs.microsoft.com/en-us/azure/azure-stack/user/azure-stack-kv-manage-portal#manage-keys-and-secrets">Manage keys and secrets</a> gives a step-by-step guideline on how to do this. Note the <em>Secret Identifier</em> field - it is required to get the secret value from inside the Azure function code.</p>
<p><img src="./images/march2018/azure_key_vault_secrets.png" class="img-fluid" alt="Azure Key Vault Secrets" title="Azure Key Vault Secrets" /></p>
<p>Now we need to grant our Azure function permissions to read the secrets from the key vault. <a href="https://medium.com/statuscode/getting-key-vault-secrets-in-azure-functions-37620fd20a0b">This great article</a> contains detailed steps on how to achieve this. In short, there are two major points:</p>
<h3 id="enable-managed-service-identiry-for-the-function-app">Enable Managed Service Identiry for the Function App</h3>
<p>As far as I understand, it makes the function app appear as an AD identity to Azure, so permissions can be granted specifically to the function as if it were a normal user:</p>
<p><img src="./images/march2018/enable_managed_service_identiry.png" class="img-fluid" alt="Enable Managed Service Identity" title="Enable Managed Service Identity" /></p>
<h3 id="add-an-access-policy-in-the-key-vault-for-the-azure-function">Add an access policy in the key vault for the Azure function</h3>
<p>This will allow the function app to read the secrets in the key vault. The access policy contains a variety of permissions, but for this sample only <em>Get</em> and <em>List</em> under <em>Secret Management Operations</em> are required:</p>
<p><img src="./images/march2018/add_access_policy.png" class="img-fluid" alt="Add Access Policy" title="Add Access Policy" /></p>
<h3 id="access-key-vault-secrets-from-the-azure-function">Access key vault secrets from the Azure function</h3>
<p>The APIs required for this reside in the following two NuGet packages, which should be added to the <em>project.json</em> file of the function:</p>
<p><img src="./images/march2018/project_json_keyvault.png" class="img-fluid" alt="Project JSON KeyVault" title="Project JSON KeyVault" /></p>
<p>Then the code itself is quite trivial:</p>
<pre><code class="language-c#">using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;
// ...
var azureServiceTokenProvider = new AzureServiceTokenProvider();
var kvClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
string vstsPAT = (await kvClient.GetSecretAsync("https://ysmainkeyvault.vault.azure.net/secrets/vstsPAT/<GUID_HERE>")).Value;
string apiKey = (await kvClient.GetSecretAsync("https://ysmainkeyvault.vault.azure.net/secrets/CognitiveServicesAPIkey/<GUID_HERE>")).Value;
</code></pre>
<p>The <em><GUID_HERE></em> token above should be replaced with the real Secret Identifier of each secret.</p>
<h2 id="use-microsoft-cognitive-services-to-detect-the-language-of-the-pull-request-title">Use Microsoft cognitive services to detect the language of the pull request title</h2>
<p><a href="https://azure.microsoft.com/en-us/services/cognitive-services/">Microsoft cognitive services</a> is a bunch of AI-driven Azure services which can do much more than just text analytics. In this article we'll just touch the surface with language part of it. I can higly recommend the <a href="https://app.pluralsight.com/library/courses/microsoft-cognitive-services-text-analytics-api/table-of-contents">Microsoft Cognitive Services: Text Analytics API</a> course on Pluralsight by <a href="http://twitter.com/MCKRUZ">Matt Kruczek</a> if you want to learn some more. In fact, I'll use slightly modified example from that course in this article.</p>
<p>To begin with, a new resource should be instantiated in Azure: Text Analytics API. It is important to choose West US as the location here, no matter which one is closer to you. For some reason, the C# API we'll work with (from the Microsoft.ProjectOxford.Text NuGet package) addresses the West US API endpoint. <a href="https://stackoverflow.com/a/47961098/274535">This StackOverflow answer</a> helped to understand the root cause.</p>
<p>Once the resource is created, make sure to get the API key (KEY 1 on the image below) and place it into the key vault:</p>
<p><img src="./images/march2018/text_analytics_keys.png" class="img-fluid" alt="Text Analytics Keys" title="Text Analytics Keys" /></p>
<p>As I mentioned above, the Microsoft.ProjectOxford.Text NuGet package is to be used to talk to the language analytics service. Let's add this NuGet package to the <em>project.json</em> of the function, too:</p>
<p><img src="./images/march2018/project_json_oxford.png" class="img-fluid" alt="Project JSON Oxford" title="Project JSON Oxford" /></p>
<p>Finally, the code itself is placed in the private method, which is called from the main function each time we need to detect the language of the text:</p>
<pre><code class="language-c#">private static int GetEnglishConfidence(string text, string apiKey, TraceWriter log)
{
var document = new Document()
{
Id = Guid.NewGuid().ToString(),
Text = text,
};
var englishConfidence = 0;
var client = new LanguageClient(apiKey);
var request = new LanguageRequest();
request.Documents.Add(document);
try
{
var response = client.GetLanguages(request);
var tryEnglish = response.Documents.First().DetectedLanguages.Where(l => l.Iso639Name == "en");
if (tryEnglish.Any())
{
var english = tryEnglish.First();
englishConfidence = (int) (english.Score * 100);
}
}
catch (Exception ex)
{
log.Info(ex.ToString());
}
return englishConfidence;
}
</code></pre>
<p>Note that it is all about sending a proper request and then reading the language score of the detected languages from the response.</p>
<h2 id="post-pull-request-status-back-to-vsts-pull-request">Post pull request status back to VSTS pull request</h2>
<p>The original guideline I referenced at the beginning of this article contains the code of another helper method we are going to change: <em>ComputeStatus</em>. Our version of this method will call the <em>GetEnglishConfidence</em> method listed above and form the proper JSON to post back to VSTS:</p>
<pre><code class="language-c#">private static string ComputeStatus(string pullRequestTitle, string apiKey, TraceWriter log)
{
string state = "failed";
string description = "The PR title is not in English";
if (GetEnglishConfidence(pullRequestTitle, apiKey, log) >= 70)
{
state = "succeeded";
description = "The PR title is in English! Please, proceed!";
}
return JsonConvert.SerializeObject(
new
{
State = state,
Description = description,
Context = new
{
Name = "AIforCI",
Genre = "pr-azure-function-ci"
}
});
}
</code></pre>
<p>Besides, since the call to the language analytics service might take some time, we need a method to post the initial <em>Pending</em> status:</p>
<pre><code class="language-c#">private static string ComputeInitialStatus()
{
string state = "pending";
string description = "Verifying title language";
return JsonConvert.SerializeObject(
new
{
State = state,
Description = description,
Context = new
{
Name = "AIforCI",
Genre = "pr-azure-function-ci"
}
});
}
</code></pre>
<p>As a result, the most interesting part of the Azure function itself will look like this:</p>
<pre><code class="language-c#">// Post the initial status (pending) while the true one is calculated
PostStatusOnPullRequest(pullRequestId, ComputeInitialStatus(), vstsPAT);
// Post the real status based on the language analysis
PostStatusOnPullRequest(pullRequestId, ComputeStatus(pullRequestTitle, apiKey, log), vstsPAT);
</code></pre>
<h2 id="demo-time-lets-tie-it-all-together">Demo time: let's tie it all together</h2>
<p>I'll assume that all the steps described in the <a href="https://docs.microsoft.com/en-us/vsts/git/how-to/create-pr-status-server-with-azure-functions">original guideline about Azure functions and pull requests</a> are completed properly. As a result, VSTS knows how to trigger our Azure function on pull request create and update events.</p>
<p>Let's create the first pull request and let it be titled in pure English:</p>
<p><img src="./images/march2018/new_pr_pending.png" class="img-fluid" alt="Pull Request Status" title="Pull Request Status" /></p>
<p>As soon as the title is verified against the language analytics service, the status changes:</p>
<p><img src="./images/march2018/new_pr_success.png" class="img-fluid" alt="Pull Request Status" title="Pull Request Status" /></p>
<p>Now, if we try to modify the title to some Russian text, the status changes accordingly:</p>
<p><img src="./images/march2018/new_pr_failure.png" class="img-fluid" alt="Pull Request Status" title="Pull Request Status" /></p>
<p>Finally, we can <a href="https://docs.microsoft.com/en-us/vsts/git/how-to/pr-status-policy?view=vsts">make a policy out of the pull request status</a>, and decide whether to block the PR completion based on the language verification result:</p>
<p><img src="./images/march2018/new_pr_policy_success.png" class="img-fluid" alt="Pull Request Policy" title="Pull Request Policy" /></p>
<h2 id="conclusion">Conclusion</h2>
<p>The combination of custom branch policies in VSTS and the power of Azure functions might result in very flexible solutions, limited only by your imagination. Give it a try and tweak your gated CI to comply with your needs.</p>
<p>Here is the full source code of the solution:</p>
<pre><code class="language-c#">#r "Newtonsoft.Json"
using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using Newtonsoft.Json;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;
using Microsoft.ProjectOxford.Text.Core;
using Microsoft.ProjectOxford.Text.Language;
private static string accountName = "[Account Name]"; // Account name
private static string projectName = "[Project Name]"; // Project name
private static string repositoryName = "[Repo Name]"; // Repository name
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
try
{
log.Info("Service Hook Received.");
// Get secrets from key vault
var azureServiceTokenProvider = new AzureServiceTokenProvider();
var kvClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
string vstsPAT = (await kvClient.GetSecretAsync("https://ysmainkeyvault.vault.azure.net/secrets/vstsPAT/<GUID_HERE>")).Value;
string apiKey = (await kvClient.GetSecretAsync("https://ysmainkeyvault.vault.azure.net/secrets/CognitiveServicesAPIkey/<GUID_HERE>")).Value;
// Get request body
dynamic data = await req.Content.ReadAsAsync<object>();
log.Info("Data Received: " + data.ToString());
// Get the pull request object from the service hooks payload
dynamic jObject = JsonConvert.DeserializeObject(data.ToString());
// Get the pull request id
int pullRequestId;
if (!Int32.TryParse(jObject.resource.pullRequestId.ToString(), out pullRequestId))
{
log.Info("Failed to parse the pull request id from the service hooks payload.");
};
// Get the pull request title
string pullRequestTitle = jObject.resource.title;
log.Info("Service Hook Received for PR: " + pullRequestId + " " + pullRequestTitle);
// Post the initial status (pending) while the true one is calculated
PostStatusOnPullRequest(pullRequestId, ComputeInitialStatus(), vstsPAT);
// Post the real status based on the language analysis
PostStatusOnPullRequest(pullRequestId, ComputeStatus(pullRequestTitle, apiKey, log), vstsPAT);
return req.CreateResponse(HttpStatusCode.OK);
}
catch (Exception ex)
{
log.Info(ex.ToString());
return req.CreateResponse(HttpStatusCode.InternalServerError);
}
}
private static void PostStatusOnPullRequest(int pullRequestId, string status, string pat)
{
string Url = string.Format(
@"https://{0}.visualstudio.com/{1}/_apis/git/repositories/{2}/pullrequests/{3}/statuses?api-version=4.0-preview",
accountName,
projectName,
repositoryName,
pullRequestId);
using (HttpClient client = new HttpClient())
{
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", Convert.ToBase64String(
ASCIIEncoding.ASCII.GetBytes(
string.Format("{0}:{1}", "", pat))));
var method = new HttpMethod("POST");
var request = new HttpRequestMessage(method, Url)
{
Content = new StringContent(status, Encoding.UTF8, "application/json")
};
using (HttpResponseMessage response = client.SendAsync(request).Result)
{
response.EnsureSuccessStatusCode();
}
}
}
private static int GetEnglishConfidence(string text, string apiKey, TraceWriter log)
{
var document = new Document()
{
Id = Guid.NewGuid().ToString(),
Text = text,
};
var englishConfidence = 0;
var client = new LanguageClient(apiKey);
var request = new LanguageRequest();
request.Documents.Add(document);
try
{
var response = client.GetLanguages(request);
var tryEnglish = response.Documents.First().DetectedLanguages.Where(l => l.Iso639Name == "en");
if (tryEnglish.Any())
{
var english = tryEnglish.First();
englishConfidence = (int) (english.Score * 100);
}
}
catch (Exception ex)
{
log.Info(ex.ToString());
}
return englishConfidence;
}
private static string ComputeInitialStatus()
{
string state = "pending";
string description = "Verifying title language";
return JsonConvert.SerializeObject(
new
{
State = state,
Description = description,
Context = new
{
Name = "AIforCI",
Genre = "pr-azure-function-ci"
}
});
}
private static string ComputeStatus(string pullRequestTitle, string apiKey, TraceWriter log)
{
string state = "failed";
string description = "The PR title is not in English";
if (GetEnglishConfidence(pullRequestTitle, apiKey, log) >= 70)
{
state = "succeeded";
description = "The PR title is in English! Please, proceed!";
}
return JsonConvert.SerializeObject(
new
{
State = state,
Description = description,
Context = new
{
Name = "AIforCI",
Genre = "pr-azure-function-ci"
}
});
}
</code></pre>
<h1><a href="http://sklyarenko.net/posts/transform-trello-list-into-markdown-file">Transform Trello list into Markdown file</a></h1>
<p><em>2018-02-20</em></p>
<p>I really enjoy reading. And I think I read a lot. I'm sure there are people who read much more, but... well, that's not what I was going to tell you.</p>
<p>One day I realized that some book recommendations I come across get lost in my memory, so I started a special <a href="https://trello.com">Trello</a> board. Basically, when I get a book recommendation from a person I respect, I add a new card to the initial <em>To Read</em> list of that board. When a certain book is read, I write a short review into the Description of the appropriate card and move it to the final <em>Done</em> list.</p>
<p><img src="./images/february2018/trello_reading_board.png" class="img-fluid" alt="Trello Reading Board" title="Trello Reading Board" /></p>
<p>Thus, the <em>Done</em> list gets populated with (quite subjective) book reviews. I thought it might be a good idea to post those reviews as a separate article. One day.</p>
<p>Since compiling long lists out of dozens of small snippets is a boring task, it turns out to be a good occasion to play with the Trello API.</p>
<p>So, the idea is to have a PowerShell script which will iterate over the cards in the list and generate a nice Markdown file.</p>
<h2 id="prerequisites-app-key-and-authorization">Prerequisites: app key and authorization</h2>
<p>First of all, you should acquire the app key - the entity required for all subsequent operations. Simply head over to <a href="https://trello.com/app-key">https://trello.com/app-key</a> to get this API key. Let's put it into a variable.</p>
<pre><code class="language-powershell">$apiKey = "LongSequenceOfCharsWhichIsBasicallyAnApiKey"
</code></pre>
<p>Real-world applications need to ask each user to authorize the application, but since we are just playing with it locally, let's manually generate a token. Open this URL in your browser:</p>
<pre><code class="language-URL">https://trello.com/1/authorize?expiration=never&scope=read&response_type=token&name=Server%20Token&key=<PASTE_ABOVE_KEY_HERE>
</code></pre>
<p>Note that we specify the expiration term (never) and the token scope (read only) here. Copy the token when it appears on the page and save it into another variable.</p>
<pre><code class="language-powershell">$token = "EvenLongerSequenceOfCharsWhichIsGuessWhatRightTheToken"
</code></pre>
<p>We'll need both values for any REST API request, so let's make our life easier:</p>
<pre><code class="language-powershell">$authSuffix = "key=$apiKey&token=$token"
</code></pre>
<h2 id="get-the-board-id-to-work-with">Get the Board ID to work with</h2>
<p>Okay, the preparations are over, and it's time to get the ID of the board we'll work with.</p>
<pre><code class="language-powershell">$response = Invoke-RestMethod -Uri "https://api.trello.com/1/members/me?boards=open&board_fields=name&$authSuffix" -Method Get
$boardId = $response.Boards | where name -EQ "Reading" | select -ExpandProperty id
</code></pre>
<p>The URL instructs the API to get all my open boards, which are then piped through the <em>where</em> filter to get the one called "Reading".</p>
<h2 id="get-the-labels-used-on-the-board">Get the Labels used on the Board</h2>
<p>When I finish a book, I mark it with one of the following labels:</p>
<ul>
<li>Green: Recommended</li>
<li>Blue: Indifferent</li>
<li>Yellow: OK for one-time reading</li>
<li>Red: Waste of time</li>
</ul>
<p>We'll format the subheaders of each book review depending on the label it has. For instance, recommended books will be highlighted in <strong>bold</strong>, while clearly not recommended ones will be <del>struck through</del>. Let's get the list of those labels:</p>
<pre><code class="language-powershell">$response = Invoke-RestMethod -Uri "https://api.trello.com/1/boards/$($boardId)?labels=all&label_fields=color&$authSuffix" -Method Get
$labels = $response.Labels | Convert-ArrayToHashTable
</code></pre>
<p>As a result, we have a hashtable - <em>LabelId : LabelColor</em>.</p>
<h2 id="get-the-list-id-containing-the-cards">Get the List ID containing the Cards</h2>
<p>Trello cards live inside lists, so let's get the <em>Done</em> list. Since I know that the necessary list is the last one, the script is a bit simpler:</p>
<pre><code class="language-powershell">$response = Invoke-RestMethod -Uri "https://api.trello.com/1/boards/$($boardId)?lists=open&list_fields=id,name&$authSuffix" -Method Get
$listId = $response.Lists | select -Last 1 -ExpandProperty id
</code></pre>
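<p>If the list position ever changes, filtering by name is a safer choice - a sketch, assuming the list is literally named "Done":</p>
<pre><code class="language-powershell">$listId = $response.Lists | where name -EQ "Done" | select -ExpandProperty id
</code></pre>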
<h2 id="get-the-cards-from-the-list">Get the Cards from the List</h2>
<p>Now that we have the list ID, we are just one call away from getting the collection of cards. Note that we fetch only the necessary fields - in this case <strong>Title</strong> (subheader), <strong>Label</strong> (subheader formatting) and <strong>Description</strong> (the actual text of the review).</p>
<pre><code class="language-powershell">$response = Invoke-RestMethod -Uri "https://api.trello.com/1/lists/$($listId)?cards=all&card_fields=name,desc,idLabels&$authSuffix" -Method Get
$cards = $response.Cards | select @{Name="Title"; Expression={$_.name}},
@{Name="Label"; Expression={$labels[$_.idLabels[0]]}},
@{Name="Description"; Expression={$_.desc}}
</code></pre>
<h2 id="and-a-bit-of-formatting-magic-for-dessert">And a bit of formatting magic for dessert</h2>
<p>Finally, the collection of cards is transformed to become a nicely formatted Markdown document:</p>
<pre><code class="language-powershell">Convert-TableToMarkdown $cards
</code></pre>
<p>Run the script, and you'll get a Markdown document similar to this one:</p>
<p><img src="./images/february2018/result_markdown.png" class="img-fluid" alt="Result Markdown" title="Result Markdown" /></p>
<h2 id="the-full-listing-of-the-powershell-script">The full listing of the PowerShell script</h2>
<p>Here is the script I ended up with, including some under-the-hood formatting magic:</p>
<pre><code class="language-powershell">Function Convert-ArrayToHashTable
{
begin { $hash = @{} }
process { $hash[$_.id] = $_.color }
end { return $hash }
}
function Get-WrapperByLabel
(
[Parameter(Mandatory=$true)] $label
)
{
switch ($label) {
"red" { "~~" }
"green" { "**" }
"blue" { "*" }
Default { "" }
}
}
Function Convert-TableToMarkdown
(
[Parameter(Mandatory=$true)] $books
)
{
$filePath = "$PSScriptRoot\result.md"
foreach ($book in $books) {
$wrapper = Get-WrapperByLabel $book.Label
"## $wrapper$($book.Title)$wrapper" | Out-File -FilePath $filePath -Encoding unicode -Append
"" | Out-File -FilePath $filePath -Encoding unicode -Append
"$($book.Description)" | Out-File -FilePath $filePath -Encoding unicode -Append
}
}
# Head over to https://trello.com/app-key to get this API key
$apiKey = "LongSequenceOfCharsWhichIsBasicallyAnApiKey"
# Use this shortcut for local test code: https://trello.com/1/authorize?expiration=never&scope=read&response_type=token&name=Server%20Token&key=<PASTE_ABOVE_KEY_HERE>
$token = "EvenLongerSequenceOfCharsWhichIsGuessWhatRightTheToken"
# shape the authentication suffix to append to each URI in REST calls
$authSuffix = "key=$apiKey&token=$token"
# get the ID of the target board
$response = Invoke-RestMethod -Uri "https://api.trello.com/1/members/me?boards=open&board_fields=name&$authSuffix" -Method Get
$boardId = $response.Boards | where name -EQ "Reading" | select -ExpandProperty id
# get the labels used on the board
$response = Invoke-RestMethod -Uri "https://api.trello.com/1/boards/$($boardId)?labels=all&label_fields=color&$authSuffix" -Method Get
$labels = $response.Labels | Convert-ArrayToHashTable
# get the last list (which contains the items I'm done reading)
$response = Invoke-RestMethod -Uri "https://api.trello.com/1/boards/$($boardId)?lists=open&list_fields=id,name&$authSuffix" -Method Get
$listId = $response.Lists | select -Last 1 -ExpandProperty id
# get the list of cards (necessary fields only)
$response = Invoke-RestMethod -Uri "https://api.trello.com/1/lists/$($listId)?cards=all&card_fields=name,desc,idLabels&$authSuffix" -Method Get
$cards = $response.Cards | select @{Name="Title"; Expression={$_.name}},
@{Name="Label"; Expression={$labels[$_.idLabels[0]]}},
@{Name="Description"; Expression={$_.desc}}
Convert-TableToMarkdown $cards
</code></pre>
<p>P.S. The <a href="https://trello.com">Trello</a> REST API is clear, concise and well-documented <a href="https://developers.trello.com/reference">here</a>. Great job!</p>
<h1><a href="http://sklyarenko.net/posts/vsts-and-teamcity-commit-status-publisher">VSTS and TeamCity Commit Status Publisher</a></h1>
<p><em>2017-11-06</em></p>
<p>Some time ago the VSTS team added a feature called <a href="https://docs.microsoft.com/en-us/vsts/release-notes/2017/aug-04-team-services#pull-request-status-extensibility-in-public-preview">Pull Request Status Extensibility</a>. It unlocked the door for external services to post custom statuses to pull requests created in Git repositories hosted in VSTS. Once the status is posted, it is possible to make a branch policy out of it, and this fact makes it a powerful feature.</p>
<blockquote class="blockquote">
<p>According to the <a href="https://docs.microsoft.com/en-us/vsts/release-notes/index">VSTS Feature Timeline</a>, Pull Request Status Extensibility will arrive in on-premises TFS 2018 RC1 and later.</p>
</blockquote>
<p>Fortunately, TeamCity has just added the option to send pull request statuses to VSTS via its <a href="https://confluence.jetbrains.com/display/TCD10/Commit+Status+Publisher">Commit Status Publisher</a> in the most recent build of version 2017.2.</p>
<blockquote class="blockquote">
<p>At the moment of writing this post, version 2017.2 is still in EAP, and I'll use <a href="https://blog.jetbrains.com/teamcity/2017/11/teamcity-2017-2-eap4-is-available/">2017.2 EAP4</a> as the first build the feature arrived with.</p>
</blockquote>
<p>These two pieces assemble into a nice picture where you can host your project in VSTS while keeping the build part entirely in TeamCity. In this post, I'll guide you through the steps required to configure this beautiful setup.</p>
<h2 id="teamcity-basic-setup-of-the-build-project">TeamCity: basic setup of the build project</h2>
<p>To begin with, we'll add <a href="https://confluence.jetbrains.com/display/TCD10/Integrating+TeamCity+with+VCS+Hosting+Services#IntegratingTeamCitywithVCSHostingServices-ConnectingtoVisualStudioTeamServices">a connection to VSTS</a> in TeamCity. It is not required, but it helps a lot in the further configuration of the VCS root and build features. Navigate to <strong>Administration > <Root Project> > Connections</strong> and click the "Add Connection" button:</p>
<p><img src="./images/november2017/tc_connection.png" class="img-fluid" alt="TeamCity connection" title="TeamCity connection" /></p>
<p>Now, let's create a new build project. Thanks to the connection configured prior to this step, the VCS root configuration is as easy as clicking a Visual Studio icon:</p>
<p><img src="./images/november2017/tc_vcsroot.png" class="img-fluid" alt="TeamCity VCS root" title="TeamCity VCS root" /></p>
<p>Choose the repository we'd like to target, and TeamCity will form the proper clone URL. Note that the Branch Specification field is set to watch pull requests, too.</p>
<p>For the sake of this demo, the build project itself is quite simple: it contains just one build configuration, which in turn consists of a single PowerShell build step faking the real build process with a several-second sleep. There's also a VCS trigger to run the build on changes in the default branch (<code>+:<default></code>) as well as on pull request merges (<code>+:pull/*/merge</code>).</p>
<p>Finally, we should configure the Commit Status Publisher, which does all the magic. Switch to Build Features on the left pane and click the "Add Build Feature" button:</p>
<p><img src="./images/november2017/tc_commit_status_publisher.png" class="img-fluid" alt="TeamCity Commit Status Publisher" title="TeamCity Commit Status Publisher" /></p>
<p>Note the checkbox hiding in the Advanced Options view. It should be turned on in order to enable pull request status publishing.</p>
<blockquote class="blockquote">
<p>Ideally, you should generate another personal access token in VSTS with only <code>Code (status)</code> and <code>Code (read)</code> scopes specified. However, being lazy, I've just clicked the <strong>magic wand</strong> icon and TeamCity pulled the <strong>all-scopes</strong> access token from the connection.</p>
</blockquote>
<h2 id="vsts-creating-a-pull-request-with-status-from-teamcity">VSTS: creating a pull request with status from TeamCity</h2>
<p>Now that we're done with the TeamCity configuration, let's go ahead and create a pull request in our VSTS Git repository. When TeamCity detects the change, it starts building the pull request. At the same time, the pull request view in VSTS displays the appropriate status:</p>
<p><img src="./images/november2017/vsts_pr_status_building.png" class="img-fluid" alt="VSTS PR Status" title="VSTS PR Status" /></p>
<p>Once the build has completed, the status is refreshed:</p>
<p><img src="./images/november2017/vsts_pr_status_success.png" class="img-fluid" alt="VSTS PR Status" title="VSTS PR Status" /></p>
<p>If you click the link, it navigates to the completed build page in TeamCity:</p>
<p><img src="./images/november2017/tc_pr_success.png" class="img-fluid" alt="TC PR Status" title="TC PR Status" /></p>
<h2 id="vsts-make-branch-policy-out-of-the-teamcity-build-status">VSTS: Make branch policy out of the TeamCity build status</h2>
<p>Once the external service has published its status to the pull request, it is possible to configure that status to serve as a branch policy for this and all other pull requests in the repository. Let's do this now.</p>
<p>Navigate to the branch policies of the <code>master</code> branch and click "Add Service" in the "Require approval from external services" section:</p>
<p><img src="./images/november2017/vsts_branch_policy.png" class="img-fluid" alt="VSTS Branch Policy" title="VSTS Branch Policy" /></p>
<p>Choose the target service from the dropdown (its name combines the TeamCity build project and configuration) and modify other options according to your needs. Note that it is possible to configure the service so that it behaves like a normal branch policy. For example, the status can be required and can expire when the source branch gets an update:</p>
<p><img src="./images/november2017/vsts_add_service.png" class="img-fluid" alt="VSTS Add Service" title="VSTS Add Service" /></p>
<p>Finally, click Save and push another change to the existing pull request. As soon as the pull request is updated, the <code>Status</code> section disappears and the new policy is displayed. It stays in waiting mode until the TeamCity build starts:</p>
<p><img src="./images/november2017/vsts_policy_new.png" class="img-fluid" alt="VSTS Policy Status" title="VSTS Policy Status" /></p>
<p>Once the build is started, the policy status changes to <code>Pending</code>:</p>
<p><img src="./images/november2017/vsts_policy_building.png" class="img-fluid" alt="VSTS Policy Status" title="VSTS Policy Status" /></p>
<p>Finally, when the build is done, it is also reflected on the custom policy status:</p>
<p><img src="./images/november2017/vsts_policy_success.png" class="img-fluid" alt="VSTS Policy Status" title="VSTS Policy Status" /></p>
<p>Similar to the pull request status behavior, it is possible to click the link and navigate to the build view in TeamCity.</p>
<h2 id="teamcity-build-normal-branches-and-post-the-status-back-to-vsts">TeamCity: build normal branches and post the status back to VSTS</h2>
<p>When we merge the pull request, the build of the master branch is triggered in TeamCity. If you switch to the Branches view in VSTS, you can see the <code>In Progress</code> icon in the Build column of the master branch:</p>
<p><img src="./images/november2017/vsts_master_building.png" class="img-fluid" alt="VSTS Master" title="VSTS Master" /></p>
<p>Once the build is completed, the icon changes to the appropriate state (<code>Success</code> in our case):</p>
<p><img src="./images/november2017/vsts_master_success.png" class="img-fluid" alt="VSTS Master" title="VSTS Master" /></p>
<h2 id="conclusion">Conclusion</h2>
<p>In this article, we've quickly run through the steps required to configure a close integration between a VSTS Git repository and a TeamCity build project. Note that I haven't written a single line of code for this to happen. This setup might be useful for projects that have an extensive build configuration in TeamCity but would like to benefit from the fantastic pull request user experience in VSTS.</p>
<h1><a href="http://sklyarenko.net/posts/err-connection-refused">ERR_CONNECTION_REFUSED</a></h1>
<p><em>2015-08-27</em></p>
<p>Today I faced a problem installing the Basic Authentication feature into the Web Server role on Windows 2012 R2. The wizard kept throwing various errors, including a scary OutOfMemoryException. A quick googling found a suggestion to run <code>netsh http show iplisten</code> and add 127.0.0.1 (aka Home Sweet Home) to the list if it's not there. I gave it a try without giving it a thought first.</p>
<p>The initial problem was not solved - the wizard kept failing to add the feature - and I finally resolved it with the mighty PowerShell:</p>
<pre><code class="language-powershell">Import-Module ServerManager
Add-WindowsFeature Web-Basic-Auth
</code></pre>
<p>Later on I had to browse a website hosted on that server, and I suddenly saw the <em>This webpage is not available</em> message. Hmm... First off, I verified that the website works locally - and it did. This gave me another hint, and I checked whether the bindings were set up correctly. And they were! Finally, I started to think the Basic Authentication feature was to blame - yeah, I know, that was a stupid assumption, but hey, stupid assumptions feel very comfortable for the brain when it faces magic...</p>
<p>Anyway, fortunately I recalled that quick dumb action I did with netsh, and the magic immediately turned into dust, revealing someone's ignorance... It turns out that if the <code>iplisten</code> list is empty, it means listen to everything, on any IP address. When you add something there, it starts listening on that IP address only.</p>
<p>Thus, it was all resolved by deleting 127.0.0.1 from that list with <code>netsh http delete iplisten ipaddress=127.0.0.1</code>.</p>
<p>Want some quick conclusion? <strong>Think first, then act!!!</strong></p>
<h1><a href="http://sklyarenko.net/posts/build-queue-starvation-in-cc-dot-net">Build queue starvation in CC.NET</a></h1>
<p><em>2014-11-24</em></p>
<p>Recently I've come across an interesting behavior in CruiseControl.NET with regard to build queues and priorities.</p>
<p>If there are many projects on one server, and the server is not quite powerful, and more than one build queue is configured, and (that's the last one) these build queues have different priorities, you might end up in a situation where CC.NET checks the same set of projects for modifications over and over again and never starts an actual build. If you add the projects from that server to CCTray, you can observe that the number of projects queued for a build reaches a certain level and never decreases.</p>
<p>This phenomenon is called "build queue starvation". It was <a href="http://www.damirscorner.com/AvoidingQueueStarvationInCruiseControlNET.aspx">described and explained by Damir Arh in his blog</a>.</p>
<p>Let me summarize the main idea.</p>
<p>When one build queue has a higher priority than another, CC.NET favors the projects from the first queue when scheduling the modification checks. Now imagine that the trigger interval of the projects from the higher priority queue is quite small and the number of such projects is big enough. This leads to a situation where the first project in the high priority queue is scheduled for a build the second time before the last project in that queue has been built the first time.</p>
<p>As a result, the lower priority queue is "starving" - none of its projects ever gets a chance to be built. The fix suggested in the link above suited my needs - the trigger interval was simply increased.</p>
<p>I should say it's not easy to google this if you're not familiar with the term <strong>build queue starvation</strong>. Besides, CC.NET doesn't feel bad in this situation, and hence doesn't help with any warnings - it just does its job, iterating the queue and following the instructions.</p>
<h1><a href="http://sklyarenko.net/posts/setting-up-an-existing-blog-on-octopress">Setting up an existing blog on Octopress</a></h1>
<p><em>2014-08-01</em></p>
<p>Okay, it took me some time and effort to set up the environment for blogging. Consider this post a quick instruction to myself for the next time I have to do this.</p>
<p>So, there's an existing blog created with <a href="http://octopress.org/">Octopress</a>, hosted on <a href="https://github.com/">GitHub</a>. The task is to set up a brand new machine for a smooth blogging experience.</p>
<blockquote class="blockquote">
<p>Note: just in case you have to create a blog from scratch, follow the official <a href="http://octopress.org/docs/setup/">Octopress docs</a>, it's quite clear.</p>
</blockquote>
<p>First of all, you should install Ruby. The Octopress docs recommend using either rbenv or RVM for this. Both words sound scary, so don't hesitate to take the easy path and download an installer from <a href="http://dl.bintray.com/oneclick/rubyinstaller/rubyinstaller-1.9.3-p545.exe?direct">here</a>. On the last page of the installation wizard, choose to add the Ruby binaries to the <code>PATH</code>:</p>
<p><img src="./images/20140801_install_ruby.png" class="img-fluid" alt="Install Ruby" title="Install Ruby" /></p>
<p>When the installer completes, check the installed version:</p>
<pre><code class="language-BAT">ruby --version
</code></pre>
<p>Then, clone the repo with the blog from GitHub. Instead of calling <code>rake setup_github_pages</code> as suggested by the Octopress docs, follow these steps found <a href="http://tech.paulcz.net/2012/12/creating-a-github-pages-blog-with-octopress.html">here</a>. Let's assume we've done that into the <code>blog</code> folder:</p>
<pre><code class="language-BAT">git clone git@github.com:username/username.github.com.git blog
cd blog
git checkout source
mkdir _deploy
cd _deploy
git init
git remote add origin git@github.com:username/username.github.com.git
git pull origin master
cd ..
</code></pre>
<p>Now do the following:</p>
<pre><code class="language-BAT">gem install bundler
bundle install
</code></pre>
<p>This should pull all the dependencies required by the Octopress engine. Here's where I faced the first inconsistency in the docs - one of the dependencies (fast-stemmer) fails to install without <a href="https://github.com/downloads/oneclick/rubyinstaller/DevKit-tdm-32-4.5.2-20111229-1559-sfx.exe">the DevKit</a>. Download it and run the installer. The installation process is documented <a href="https://github.com/oneclick/rubyinstaller/wiki/Development-Kit">here</a>, but the quickest way is:</p>
<ul>
<li>self-extract the archive</li>
<li><code>cd</code> to that folder</li>
<li>run <code>ruby dk.rb init</code></li>
<li>then run <code>ruby dk.rb install</code></li>
</ul>
<p>After this, re-run the <code>bundle install</code> command.</p>
<p>Well, at this point you should be able to create new posts with the <code>rake new_post[title]</code> command. Generate the resulting HTML with <code>rake generate</code> and preview it with <code>rake preview</code> to make sure it produces what you expect.</p>
<p><em><strong>An important note about syntax highlighting</strong></em></p>
<p>Octopress uses <a href="http://pygments.org/">Pygments</a> to highlight the code. This is a <a href="https://www.python.org/">Python</a> thing, and obviously you should install Python for this to work. Choose a <a href="https://www.python.org/ftp/python/2.7.8/python-2.7.8.msi">2.x version of Python</a> - the 3.x version doesn't work. This is important: you won't be able to generate HTML from Markdown otherwise.</p>
<p>That's it! Hope this will save me some time in the future.</p>
<h1><a href="http://sklyarenko.net/posts/migrate-attachments-from-ontime-to-tfs">Migrate attachments from OnTime to TFS</a></h1>
<p><em>2014-07-31</em></p>
<p>When you move from one bug tracking system to another, the accuracy of the process is very important. A single missing point can make a work item useless. An attached image is often worth a thousand words. Hence, today's post is about migrating attachments from <a href="http://www.axosoft.com/">OnTime</a> to <a href="http://en.wikipedia.org/wiki/Team_Foundation_Server">TFS</a>.</p>
<blockquote class="blockquote">
<p>NOTE: The samples in this post rely on OnTime SDK, which was replaced by a <a href="http://developer.axosoft.com/api">brand new REST API</a>.</p>
</blockquote>
<p>OnTime SDK is a set of web services, and each "area" is usually covered by one or a number of web services. The operations with attachments are grouped in <code>/sdk/AttachmentService.asmx</code> web service.</p>
<p>So, the first thing to do is to grab all attachments of the OnTime defect:</p>
<pre><code class="language-c#">var rawAttachments = _attachmentService.GetAttachmentsList(securityToken, AttachmentSourceTypes.Defect, defect.DefectId);
</code></pre>
<p>This method returns a <code>DataSet</code>, and you'll have to enumerate its rows to grab the useful data:</p>
<pre><code class="language-c#">var attachments = rawAttachments.Tables[0].AsEnumerable();
foreach (var attachment in attachments)
{
// wi is a TFS work item object
wi.Attachments.Add(GetAttachment(attachment));
}
</code></pre>
<p>Now, let's take a look at the <code>GetAttachment</code> method, which actually does the job. It accepts the <code>DataRow</code>, and returns the TFS <code>Attachment</code> object:</p>
<pre><code class="language-c#">private Attachment GetAttachment(DataRow attachmentRow)
{
var onTimeAttachment = _attachmentService.GetByAttachmentId(securityToken, (int)attachmentRow["AttachmentId"]);
var tempFile = Path.Combine(Path.GetTempPath(), onTimeAttachment.FileName);
if (File.Exists(tempFile))
File.Delete(tempFile);
File.WriteAllBytes(tempFile, onTimeAttachment.FileData);
return new Attachment(tempFile, onTimeAttachment.Description);
}
</code></pre>
<p>A couple of things to notice here:</p>
<ul>
<li>you have to call another web method to pull the binary data of the attachment</li>
<li>OnTime attachment metadata, such as the attachment description, is quite useful and can be moved to TFS as is</li>
</ul>
<p>Finally, when a new attachment is added to the TFS work item, "increment" the <code>ChangedDate</code> of the work item before saving it. The TFS server often refuses to save work item data when the previous revision has exactly the same date/time stamp. Like this (it has always worked for me):</p>
<pre><code class="language-c#">wi[CoreField.ChangedDate] = wi.ChangedDate.AddSeconds(5);
wi.Save();
</code></pre>
<p>Hope it's useful. Good luck!</p>
http://sklyarenko.net/posts/nant-task-behaves-differently-in-092NAnt <copy> task behaves differently in 0.92 and prior versions2013-01-03T11:35:00Z<p>If you need to copy a folder together with all its contents to another folder in NAnt, you would typically write something like this:</p>
<pre><code class="language-XML"><copy todir="${target}">
<fileset basedir="${source}" />
</copy>
</code></pre>
<p>It turns out this code works correctly in NAnt 0.92 Alpha and above. The output is expected:</p>
<blockquote class="blockquote">
<p>[copy] Copying 1 directory to '...'.</p>
</blockquote>
<p>However, the same code doesn't work in prior versions of NAnt, for instance, 0.91. The output is as follows (only in <code>-debug+</code> mode):</p>
<blockquote class="blockquote">
<p>[copy] Copying 0 files to '...'.</p>
</blockquote>
<p>Obviously, <a href="https://github.com/nant/nant/issues/11">the issue was fixed in 0.92</a>, so the best recommendation is to upgrade the NAnt toolkit. However, if that is not an option for some reason, the following code seems to work correctly in any version:</p>
<pre><code class="language-XML"><copy todir="${target}">
<fileset basedir="${source}">
<include name="**/*" />
</fileset>
</copy>
</code></pre>
<p>Hope this saves you some time.</p>
http://sklyarenko.net/posts/possible-source-of-signtool-bad-formatPossible source of the signtool 'bad format' 0x800700C1 problem2012-11-19T17:26:00Z<p>Today I faced a weird problem. The operation to sign the EXE file (actually, an installation package) with a valid certificate failed with the following error:</p>
<pre><code class="language-LOG">[exec] SignTool Error: SignedCode::Sign returned error: 0x800700C1
[exec] Either the file being signed or one of the DLL specified by /j switch is not a valid Win32 application.
[exec] SignTool Error: An error occurred while attempting to sign: D:\output\setup.exe
</code></pre>
<p>This kind of error is usually an indication of a format incompatibility, <a href="http://technet.microsoft.com/en-us/library/cc782541(WS.10).aspx">when the bitness of the signtool.exe and the bitness of the EXE in question don’t correspond</a>. However, this was not the case.</p>
<p>It turns out that the original EXE file had been generated incorrectly because of a lack of disk space. That's why it was broken and recognized by signtool as a bad-format file. After a disk cleanup, everything worked perfectly and the EXE file was signed correctly.</p>
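<p>In hindsight, a quick structural check of the EXE before signing would have surfaced the problem earlier. Here's a minimal C# sketch of such a check – my own addition, not part of the original build – which verifies the MZ/PE signatures of a file:</p>
<pre><code class="language-csharp">using System;
using System.IO;

class PeSanityCheck
{
    // Returns true if the file starts with the 'MZ' DOS header
    // and contains the 'PE\0\0' signature at the e_lfanew offset
    static bool LooksLikeValidPe(string path)
    {
        using (var reader = new BinaryReader(File.OpenRead(path)))
        {
            if (reader.BaseStream.Length < 0x40 || reader.ReadUInt16() != 0x5A4D) // 'MZ'
                return false;
            reader.BaseStream.Seek(0x3C, SeekOrigin.Begin);
            int peOffset = reader.ReadInt32(); // e_lfanew: offset of the PE header
            if (peOffset <= 0 || peOffset > reader.BaseStream.Length - 4)
                return false;
            reader.BaseStream.Seek(peOffset, SeekOrigin.Begin);
            return reader.ReadUInt32() == 0x00004550; // 'PE\0\0'
        }
    }

    static void Main(string[] args)
    {
        Console.WriteLine(LooksLikeValidPe(args[0]) ? "Looks like a valid PE file" : "Broken or truncated file");
    }
}
</code></pre>
<p>A file truncated by a full disk would typically fail this check right away.</p>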
<p>Hope this saves someone some time.</p>
http://sklyarenko.net/posts/a-solution-can-build-fine-from-insideA solution can build fine from inside the Visual Studio, but fail to build with msbuild.exe2012-10-29T16:35:00Z<p>Today I faced an interesting issue. Although I failed to reproduce it on a fresh new project, I think this info might be useful for others.</p>
<p>I have a solution which was upgraded from targeting .NET Framework 2.0 to .NET Framework 3.5. I've got a patch from a fellow developer to apply to one of the projects of that solution. The patch adds new files as well as modifies existing ones. After applying the patch, the solution builds successfully from inside Visual Studio, but fails to build from the command line with msbuild.exe. The error states that</p>
<blockquote class="blockquote">
<p>"The type or namespace name 'Linq' does not exist in the namespace 'System' ".</p>
</blockquote>
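<p>For the record, the code that triggers the error is typically any LINQ usage. A hypothetical snippet like this compiles fine inside Visual Studio, yet fails under MSBuild 3.5 without the fix described below:</p>
<pre><code class="language-csharp">using System;
using System.Linq; // lives in System.Core - the assembly in question

class LinqExample
{
    static void Main()
    {
        var numbers = new[] { 3, 1, 2 };
        // Any LINQ extension method needs System.Core to compile
        Console.WriteLine(numbers.OrderBy(n => n).First());
    }
}
</code></pre>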
<p>The msbuild version is 3.5:</p>
<pre><code class="language-LOG">[exec] Microsoft (R) Build Engine Version 3.5.30729.5420
[exec] [Microsoft .NET Framework, Version 2.0.50727.5456]
[exec] Copyright (C) Microsoft Corporation 2007. All rights reserved.
</code></pre>
<p>It turns out other people have hit this issue, and it was even reported to Microsoft. Microsoft suggested using MSBuild.exe 4.0 to build VS 2010 projects. However, they confirmed it is possible to use MSBuild.exe 3.5 - in this case, a reference to <code>System.Core</code> (3.5.0.0) must be explicitly added to the <code>csproj</code> file.
If you try to add a reference to <code>System.Core</code> from inside Visual Studio, you'll get an error saying:</p>
<blockquote class="blockquote">
<p>"A reference to 'System.Core' could not be added. This component is already automatically referenced by the build system"</p>
</blockquote>
<p>So, it seems that when you build a solution from inside Visual Studio, it is capable of automatically loading implicitly referenced assemblies. I suppose MSBuild.exe 4.0 (and even an SP1-patched MSBuild.exe 3.5?) can do this as well. Apparently, this has also turned out to be a known problem – you can't add that reference from the IDE. Open the <code>csproj</code> file in your favorite editor and add this:</p>
<pre><code class="language-XML"><Reference Include="System.Core" />
</code></pre>
<p>After this, the project builds fine in both VS and MSBuild.</p>
http://sklyarenko.net/posts/default-attribute-values-for-customDefault attribute values for custom NAnt tasks2012-08-15T16:35:00Z<p>When you create custom <a href="http://nant.sourceforge.net/">NAnt</a> tasks, you can specify various task parameter characteristics, such as whether it is a required attribute, how it validates its value, etc. This is done via the custom attributes in .NET, for example:</p>
<pre><code class="language-csharp">[TaskAttribute("param", Required = true), StringValidator(AllowEmpty = false)]
public string Param { get; set; }
</code></pre>
<p>It might be a good idea to be able to specify a default value for a task parameter in a similar way, for instance:</p>
<pre><code class="language-csharp">[TaskAttribute("port"), Int32Validator(1000, 65520), DefaultValue(16333)]
public int Port { get; set; }
</code></pre>
<p>Let's examine the way it can be implemented. First of all, let's define the custom attribute for the default value:</p>
<pre><code class="language-csharp">/// <summary>
/// The custom attribute for the task attribute default value
/// </summary>
public class DefaultValueAttribute : Attribute
{
public DefaultValueAttribute(object value)
{
this.Default = value;
}
public object Default { get; set; }
}
</code></pre>
<p>I suppose the <a href="http://msdn.microsoft.com/en-us/library/system.componentmodel.defaultvalueattribute.aspx">standard .NET <code>DefaultValueAttribute</code></a> could be used for this purpose as well, but the one above is very simple and good enough for this sample. Note also that in this situation we could benefit from generic custom attributes, <a href="http://stackoverflow.com/questions/294216/why-does-c-sharp-forbid-generic-attribute-types">which unfortunately are not supported in C#, although they are quite valid for the CLR</a>.</p>
<p>Now, when the attribute is defined, let's design the way default values will be applied at runtime. For this purpose we'll have to define a special base class for all the custom tasks in which we'd like to use the default values technique:</p>
<pre><code class="language-csharp">public abstract class DefaultValueAwareTask : Task
{
protected override void ExecuteTask()
{
this.SetDefaultValues();
}
protected virtual void SetDefaultValues()
{
foreach (var property in GetPropertiesWithCustomAttributes<DefaultValueAttribute>(this.GetType()))
{
var attribute = (TaskAttributeAttribute)property.GetCustomAttributes(typeof(TaskAttributeAttribute), false)[0];
var attributeDefaultValue = (DefaultValueAttribute)property.GetCustomAttributes(typeof(DefaultValueAttribute), false)[0];
if (attribute.Required)
{
throw new BuildException("No reason to allow both to be set", this.Location);
}
if (this.XmlNode.Attributes[attribute.Name] == null)
{
property.SetValue(this, attributeDefaultValue.Default, null);
}
}
}
private static IEnumerable<PropertyInfo> GetPropertiesWithCustomAttributes<T>(Type type)
{
return type.GetProperties(BindingFlags.DeclaredOnly | BindingFlags.Public | BindingFlags.Instance).Where(property => property.GetCustomAttributes(typeof(T), false).Length > 0);
}
}
</code></pre>
<p>Let's examine what this code actually does. The key method here is <code>SetDefaultValues()</code>. It iterates through the task parameters (the public properties marked with the <code>DefaultValueAttribute</code> attribute) of the class it is defined in and checks whether the value carried by the <code>DefaultValueAttribute</code> should be set as the true value of the task parameter. It is quite simple: if the <code>XmlNode</code> of the NAnt task definition doesn't contain the parameter in question, the developer didn't set it explicitly, and it is necessary to set the default value. Moreover, if a task parameter is marked as <code>Required</code> and has a default value at the same time, the situation is treated as a misuse and an exception is thrown.</p>
<p>Obviously, when a custom NAnt task derives from the <code>DefaultValueAwareTask</code>, it has to call <code>base.ExecuteTask()</code> at the very start of its <code>ExecuteTask()</code> method implementation for this technique to work.</p>
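<p>To illustrate, here's a hypothetical task deriving from <code>DefaultValueAwareTask</code> (the task name and parameter are made up for this sketch; the usual <code>NAnt.Core</code> and <code>NAnt.Core.Attributes</code> namespaces are assumed):</p>
<pre><code class="language-csharp">[TaskName("startserver")]
public class StartServerTask : DefaultValueAwareTask
{
    [TaskAttribute("port"), Int32Validator(1000, 65520), DefaultValue(16333)]
    public int Port { get; set; }

    protected override void ExecuteTask()
    {
        // Let the base class apply the default values before any parameter is used
        base.ExecuteTask();
        Log(Level.Info, "Starting the server on port {0}...", this.Port);
    }
}
</code></pre>
<p>If the build file omits the <code>port</code> attribute, the task ends up with 16333.</p>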
http://sklyarenko.net/posts/generate-solution-file-for-number-of-cGenerate a solution file for a number of C# project files in a folder2012-07-06T14:59:00Z<p>Some time ago I wrote my first T4 template, which generates a solution (<code>*.sln</code>) file out of a number of C# project (<code>*.csproj</code>) files located in a folder and all its descendants. Although it turned out not to be necessary for the task I was working on, and admittedly it's quite simple, I still decided to share it for further reference. Maybe someone will find it useful. So, below is the entire T4 template, with no extra comments:</p>
<pre><code class="language-csharp">Microsoft Visual Studio Solution File, Format Version 11.00
# Visual Studio 2010
<#@ template language="cs" hostspecific="false" #>
<#@ output extension=".sln" #>
<#@ parameter name="Folder" type="System.String" #>
<#@ assembly name="System.Core" #>
<#@ assembly name="System.Xml" #>
<#@ assembly name="System.Xml.Linq" #>
<#@ import namespace="System.IO" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Xml.Linq" #>
<#
if (Directory.Exists(Folder))
{
var csprojFiles= Directory.GetFiles(Folder, "*.csproj", SearchOption.AllDirectories);
foreach (var file in csprojFiles)
{
ProjectFileMetaData metaData = new ProjectFileMetaData(file, Folder);
WriteLine("Project(\"{3}\") = \"{0}\", \"{1}\", \"{2}\"", metaData.Name, metaData.Path, metaData.Id, ProjectFileMetaData.ProjectTypeGuid);
WriteLine("EndProject");
}
}
#>
<#
public class ProjectFileMetaData
{
public static string ProjectTypeGuid = "{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}";
public ProjectFileMetaData(string file, string root)
{
InitProperties(file, root);
}
public string Name { get; set; }
public string Path { get; set; }
public string Id { get; set; }
private void InitProperties(string file, string root)
{
XDocument xDoc = XDocument.Load(file);
XNamespace ns = @"http://schemas.microsoft.com/developer/msbuild/2003";
XElement xElement = xDoc.Root.Elements(XName.Get("PropertyGroup", ns.NamespaceName)).First().Element(XName.Get("ProjectGuid", ns.NamespaceName));
if (xElement != null)
{
this.Id = xElement.Value;
}
this.Path = file.Substring(root.Length).TrimStart(new char[] { '\\' });
this.Name = System.IO.Path.GetFileNameWithoutExtension(file);
}
}
#>
</code></pre>
http://sklyarenko.net/posts/simple-batch-script-to-dump-contents-ofA simple batch script to dump the contents of the folder and its subfolders recursively2012-03-02T11:13:00Z<p>This topic might seem too minor for a blog post. You can argue that it's covered by a simple call to a <a href="http://ss64.com/nt/dir.html"><code>dir /s</code></a> command. Well, that's true unless you need to perform some actions with each line in the list. In this case it could be tricky if you do not use BATCH files on a daily basis.</p>
<p>Imagine you need to dump the file paths in a folder and its subfolders to a plain list. Besides, you'd like to replace the absolute path prefix with a UNC share prefix, because each path contains a shared folder and each file will be accessible from inside the network. So, here goes the script:</p>
<pre><code class="language-BAT">@echo off
set _from=*repo
set _to=\\server\repo
FOR /F "tokens=*" %%G IN ('dir /s /b /a:-D /o:-D') DO (CALL :replace %%G) >> files.txt
GOTO :eof
:replace
set _str=%1
call set _result=%%_str:%_from%=%_to%%%
echo %_result%
GOTO :eof
</code></pre>
<p>Let's start with the <a href="http://ss64.com/nt/for_cmd.html"><code>FOR</code></a> loop. This version of the command loops through the output of another command, in this case, <code>dir</code>. Essentially, we ask <code>dir</code> to run recursively (<code>/s</code>), ignore directories (<code>/a:-D</code>), sort by date/time, newest first (<code>/o:-D</code>) and output just the basic information (<code>/b</code>). And the <code>FOR</code> command works on top of this, iterating over all lines of the <code>dir</code> output (<code>tokens=*</code>), calling the <code>:replace</code> subroutine for each line and streaming the final result into <em>files.txt</em>.</p>
<p>The subroutine does a very simple thing – it replaces one part of a string with another. Let's step through it anyway. First, it gets the input parameter (<code>%1</code>) and saves it into the <code>_str</code> variable. I suppose <code>%1</code> could be used as is in the expression below, but the number of <code>%</code> signs drives me crazy even without it. The next line is the most important – it does the <a href="http://ss64.com/nt/syntax-replace.html">actual replacement job</a>. I'll try to explain all these <code>%</code> signs: a variable inside the expression must be wrapped with <code>%</code> (like <code>_from</code> and <code>_to</code>); the expression itself goes between <code>%</code> and <code>%</code> as if it were a variable itself. And the outermost pair of <code>%</code> is there for escaping purposes, I suppose – you can avoid it if you use plain string literals for the tokens in the expression. Note also the usage of the <a href="http://ss64.com/nt/call.html"><code>CALL SET</code> statement</a>. Finally, the last line of the subroutine echoes the result.</p>
<p>There's one last point worth attention. The <code>_from</code> variable, which represents the token to replace, contains a <code>*</code> sign. It means <em>replace 'repo' and everything before it</em> in the replace expression, so that, say, <code>D:\data\repo\docs\readme.txt</code> turns into <code>\\server\repo\docs\readme.txt</code>.</p>
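<p>As a side note, if the escaping feels too fragile, the same dump takes only a few lines of C# – an alternative sketch of mine, where the local root and the UNC prefix are placeholders:</p>
<pre><code class="language-csharp">using System;
using System.IO;
using System.Linq;

class DumpFileList
{
    static void Main()
    {
        var root = @"D:\data\repo"; // placeholder: the local folder to scan
        var files = Directory.GetFiles(root, "*", SearchOption.AllDirectories)
            .OrderByDescending(File.GetLastWriteTime)              // newest first, like /o:-D
            .Select(path => path.Replace(root, @"\\server\repo")); // swap in the UNC prefix
        File.WriteAllLines("files.txt", files.ToArray());
    }
}
</code></pre>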
<p>The best resource I found on the topic is <a href="http://ss64.com/nt/">SS64</a>.</p>
http://sklyarenko.net/posts/revisited-multiple-instance-and-patchesRevisited: Multiple Instance installations and patches2011-09-14T23:00:00Z<p>I initially <a href="./multiple-instance-installations-and-patches">blogged about multiple instance installations</a> a couple of years ago. The way I described it worked fine for me, but time flies and things have changed since then – WiX grew into an even more solid toolset, and I also gained some knowledge. So, this post revisits the topic and looks at it through the prism of WiX 3.6.</p>
<p>Imagine you have an application, and you'd like to be able to install several instances of it side-by-side on a single machine. The starting point is still to author the <a href="http://wix.sourceforge.net/manual-wix3/wix_xsd_instancetransforms.htm"><code>InstanceTransforms</code></a> element:</p>
<pre><code class="language-XML"><InstanceTransforms Property="INSTANCEID">
<Instance Id="I01" ProductCode="{GUIDGOES-HERE-4731-8DAA-9E843A03D482}" ProductName="My Product 01"/>
<Instance Id="I02" ProductCode="{GUIDGOES-HERE-4f1a-9E88-874745E9224C}" ProductName="My Product 02"/>
<Instance Id="I03" ProductCode="{GUIDGOES-HERE-5494-843B-BC07BBC022DB}" ProductName="My Product 03"/>
...
</InstanceTransforms>
</code></pre>
<p>Obviously, the number of <code>Instance</code> elements defines how many instances this installation program supports in addition to the default one. In order to install the default instance, you run the following command (assuming the generated MSI package is called MultiInstance.msi):</p>
<pre><code class="language-BAT">msiexec /i MultiInstance.msi
</code></pre>
<p>In order to start the installation of another instance, change the command as follows:</p>
<pre><code class="language-BAT">msiexec /i MultiInstance.msi MSINEWINSTANCE=1 TRANSFORMS=":I01"
</code></pre>
<p>The <a href="http://msdn.microsoft.com/en-us/library/aa370326.aspx"><code>MSINEWINSTANCE</code></a> property set to 1 instructs msiexec to start the installation of another instance instead of the default one. Note that in the above example we are installing the instance <code>I01</code>. The <code>Instance</code> element results in an <a href="http://msdn.microsoft.com/en-us/library/aa369528.aspx">instance transform</a> being embedded into the MSI package, and by setting the <a href="http://msdn.microsoft.com/en-us/library/aa372085.aspx"><code>TRANSFORMS</code></a> property to <code>:I01</code> we instruct msiexec to apply the embedded instance transform which corresponds to the <code>I01</code> instance. The <code>TRANSFORMS</code> property can contain other transforms (for instance, language transforms), but that's another topic.</p>
<p>Uninstalling looks quite similar, for instance, default instance uninstallation:</p>
<pre><code class="language-BAT">msiexec /x MultiInstance.msi
</code></pre>
<p>In order to uninstall the extra instance, you should explicitly specify its ProductCode. So, for instance <code>I01</code> the uninstall command line looks like this:</p>
<pre><code class="language-BAT">msiexec /x {GUIDGOES-HERE-4731-8DAA-9E843A03D482}
</code></pre>
<p>So far, so good – it is quite straightforward. Now, let's turn to the <a href="http://msdn.microsoft.com/en-us/library/aa367797.aspx">Windows Installer documentation about multiple instances</a> one more time. Apart from the requirement for each instance to have a unique product code and instance identifier (this is what WiX does for free with the <code>InstanceTransforms</code> technique), it strongly recommends keeping the data isolated. For the file data, this means installing the files of each instance to a different location – a path containing the instance ID as its part fits best. For the non-file data, it's a bit more complex: the appropriate components should have different GUIDs and, again, install to a different location.</p>
<p>In <a href="./multiple-instance-installations-and-patches">my first attempt to approach the problem</a>, I applied a workaround: generate new GUIDs for each component of a new instance, embed those <em>component transforms</em> into the resulting MSI and apply them along with the instance transform. Well, it doesn't sound very efficient, but given a great number of automatically harvested components, it was simple enough. Fortunately, the wise developers of the WiX team thought this through and came up with a far more elegant solution in version 3.6.</p>
<p>Starting from <a href="http://wix.sourceforge.net/releases/3.6.1502.0/">WiX 3.6.1502.0</a>, the <a href="http://wix.sourceforge.net/manual-wix3/wix_xsd_component.htm">Component</a> element has an attribute <code>MultiInstance</code> of the <code>YesNo</code> type. According to the WiX docs, <em>"If this attribute is set to <code>yes</code>, a new <code>Component/@Guid</code> will be generated for each instance transform."</em> Fantastic! That's exactly what we need! Let's see how it affects multiple instance installations with a sample. Let's say our installation program consists of the following components, and we'd like to be able to install this software at least 3 times:</p>
<pre><code class="language-XML"><Directory Id="ProductNameFolder" Name="TestName">
<Component Id="FileComponent" Guid="{GUIDGOES-HERE-4301-95D2-86A4C80EF5F0}">
<File Id="dll" Source="$(var.Source)\Some.Test.dll" KeyPath="yes" />
</Component>
<Component Id="ConfigComponent" Guid="{GUIDGOES-HERE-4c2f-BE74-CF78D2350E48}">
<File Id="web_config" Source="$(var.Source)\web.config" KeyPath="yes" />
</Component>
<Directory Id="EmptyFolderDir" Name="EmptyFolder">
<Component Id="FolderComponent" Guid="{GUIDGOES-HERE-4543-A9F8-17491670D3A6}">
<CreateFolder />
</Component>
</Directory>
<Component Id="RegistryComponent" Guid="{GUIDGOES-HERE-45e5-ABFD-07E5CC4D7BC9}">
<RegistryKey Id="MainRegKey" Action="createAndRemoveOnUninstall" Root="HKLM" Key="SOFTWARE\MultiInstanceTest\[ProductCode]">
<RegistryValue Id="MainRegValue" Name="InstanceId" Value="[INSTANCEID]" Type="string" />
<RegistryValue Id="InstallPathValue" Name="Location" Value="[ProductNameFolder]" Type="string" />
<RegistryValue Id="ProductCodeValue" Name="ProductCode" Value="[ProductCode]" Type="string" />
<RegistryValue Id="ProductNameValue" Name="ProductName" Value="[ProductName]" Type="string" />
<RegistryValue Id="ProductVersionValue" Name="ProductVersion" Value="[ProductVersion]" Type="string" />
</RegistryKey>
</Component>
</Directory>
</code></pre>
<pre><code class="language-XML"><InstanceTransforms Property="INSTANCEID">
<Instance Id="I01" ProductCode="{GUIDGOES-HERE-4731-8DAA-9E843A03D482}" ProductName="My Product 01"/>
<Instance Id="I02" ProductCode="{GUIDGOES-HERE-4f1a-9E88-874745E9224C}" ProductName="My Product 02"/>
</InstanceTransforms>
</code></pre>
<p>The <a href="http://msdn.microsoft.com/en-us/library/aa367797.aspx">MSDN recommendations about multiple instances</a> are followed, except for <em>keeping non-file data isolated</em>. Let's see how this affects the install/uninstall. Run the installation of the default and the <code>I01</code> instance as described above. Both instances are correctly installed to different locations:</p>
<p><img src="./images/20110914_Instance00installed.png" class="img-fluid" alt="Instance 00 installed" title="Installed" /></p>
<p><img src="./images/20110914_Instance00RegInstalled.png" class="img-fluid" alt="Instance 00 Registry Installed" title="Registry Installed" /></p>
<p><img src="./images/20110914_Instance01installed.png" class="img-fluid" alt="Instance 01 installed" title="Installed" /></p>
<p><img src="./images/20110914_Instance01RegInstalled.png" class="img-fluid" alt="Instance 01 Registry Installed" title="Registry Installed" /></p>
<p>Now uninstall the default instance – you’ll see that non-file data was not removed properly:</p>
<p><img src="./images/20110914_Instance00broken.png" class="img-fluid" alt="Instance 00 Broken" title="Broken" /></p>
<p><img src="./images/20110914_Instance00RegBroken.png" class="img-fluid" alt="Instance 00 Registry Broken" title="Registry Broken" /></p>
<p>This is happening because the components which hold this data are considered shared by the Windows Installer: during the uninstallation of one instance, it detects that there's another instance pointing to the same components and leaves those untouched. Now, if you uninstall the other instance, it successfully removes both the <code>EmptyFolder</code> and the registry key, but as a result we still end up with orphaned resources of the first instance.</p>
<p>That's the initial problem, so let's see how elegantly the new WiX feature deals with it. You only need to add the <code>MultiInstance='yes'</code> attribute to the components holding non-file data, and forget about the problem of orphaned resources forever. Like this:</p>
<pre><code class="language-XML"><Directory Id="ProductNameFolder" Name="TestName">
<Component Id="FileComponent" Guid="{GUIDGOES-HERE-4301-95D2-86A4C80EF5F0}">
<File Id="dll" Source="$(var.Source)\Some.Test.dll" KeyPath="yes" />
</Component>
<Component Id="ConfigComponent" Guid="{GUIDGOES-HERE-4c2f-BE74-CF78D2350E48}">
<File Id="web_config" Source="$(var.Source)\web.config" KeyPath="yes" />
</Component>
<Directory Id="EmptyFolderDir" Name="EmptyFolder">
<Component Id="FolderComponent" Guid="{GUIDGOES-HERE-4543-A9F8-17491670D3A6}" MultiInstance="yes">
<CreateFolder />
</Component>
</Directory>
<Component Id="RegistryComponent" Guid="{GUIDGOES-HERE-45e5-ABFD-07E5CC4D7BC9}" MultiInstance="yes">
<RegistryKey Id="MainRegKey" Action="createAndRemoveOnUninstall" Root="HKLM" Key="SOFTWARE\MultiInstanceTest\[ProductCode]">
<RegistryValue Id="MainRegValue" Name="InstanceId" Value="[INSTANCEID]" Type="string" />
<RegistryValue Id="InstallPathValue" Name="Location" Value="[ProductNameFolder]" Type="string" />
<RegistryValue Id="ProductCodeValue" Name="ProductCode" Value="[ProductCode]" Type="string" />
<RegistryValue Id="ProductNameValue" Name="ProductName" Value="[ProductName]" Type="string" />
<RegistryValue Id="ProductVersionValue" Name="ProductVersion" Value="[ProductVersion]" Type="string" />
</RegistryKey>
</Component>
</Directory>
</code></pre>
<p>Now check the above scenario once again: install 2 instances and uninstall them. You'll see that both install correctly and uninstall cleanly. Isn't it GREAT?! :)</p>
<p>Now, let's turn to patching. Again, looking back at <a href="./multiple-instance-installations-and-patches">my initial post on this topic</a>, I was using an ugly method to make the patch applicable to all instances of the installed product. That method involved opening the binary patch for read/write and a crude injection into its structure. Though it worked, there's a much more elegant way of doing this. I'd like to thank <a href="http://blogs.msdn.com/b/heaths/">Heath Stewart</a> for the hint – here's the <a href="http://www.mail-archive.com/wix-users@lists.sourceforge.net/msg27696.html">full thread on the wix-users mailing list</a>.</p>
<p>So, the default behavior is the following: if you author the <a href="http://wix.sourceforge.net/manual-wix3/wix_xsd_patchbaseline.htm"><code>PatchBaseline</code></a> element with its default validation settings, the patch will be applicable to the default instance only. That's because it tracks the <code>ProductCode</code> of the product baseline it was built against and checks it at install time. The trick is to add a <a href="http://wix.sourceforge.net/manual-wix3/wix_xsd_validate.htm">Validate</a> child to the <code>PatchBaseline</code> and instruct it not to check the <code>ProductCode</code>:</p>
<pre><code class="language-XML"><Media Id="5000" Cabinet="RTM.cab">
<PatchBaseline Id="RTM">
<Validate ProductId="no" />
</PatchBaseline>
</Media>
</code></pre>
<p>So, after you build this patch, you'll be able to apply it to a particular instance:</p>
<pre><code class="language-BAT">msiexec /i {GUIDGOES-HERE-4412-9BC2-17DAFFB00D20} PATCH=patch.msp /l*v patch.log
</code></pre>
<p>Or to all the installed instances at once (so-called <em>double-click scenario</em>):</p>
<pre><code class="language-BAT">msiexec.exe /p patch.msp /l*vx patch.log
</code></pre>
<p>There's still one more obvious inconvenience in the patch authoring, if you ask me. You have to specify the <code>ProductCode</code> entries twice: in the main installation sources (<code>InstanceTransform/@ProductCode</code>) and in the patch sources (<code>TargetProductCode/@Id</code>). It would be just fantastic if, during patch building, the WiX tools could look into the instance transforms collection of the baseline package and take the list of product codes from there. That would remove the need to always specify the following section in the patch:</p>
<pre><code class="language-XML"><TargetProductCodes Replace="no">
<TargetProductCode Id="{GUIDGOES-HERE-4412-9BC2-17DAFFB00D20}" />
<TargetProductCode Id="{GUIDGOES-HERE-4731-8DAA-9E843A03D482}" />
<TargetProductCode Id="{GUIDGOES-HERE-4f1a-9E88-874745E9224C}" />
</TargetProductCodes>
</code></pre>
<p>As usual, the WiX Toolset developers have done and keep doing a fantastic job making our lives as setup developers easier!</p>
<p>Feel free to leave a comment in case you have a note or a question. Feedback is welcome, as usual!</p>
http://sklyarenko.net/posts/moving-to-dotnetinstaller-odd-basic-uiMoving to dotNetInstaller: the odd Basic UI2011-02-24T18:36:00Z<p>In the <a href="./moving-to-dotnetinstaller-launch">previous post</a>, I've outlined how to emulate the launch conditions behavior in dotNetInstaller. In that article I have also emphasized the importance of turning the UI into the Basic mode. It is necessary in order to avoid extra dialogs which require user interaction. If you followed the scenario I described, you might have noticed a strange behavior of the <code>BasicUI</code> mode: <strong>the message boxes disappear without any user participation</strong>. I thought it was a kind of bug, but it was done on purpose. Take a look at this code (taken from the dotNetInstaller sources):</p>
<pre><code class="language-csharp">int DniMessageBox::Show(const std::wstring& p_lpszText, UINT p_nType /*=MB_OK*/, UINT p_nDefaultResult /*=MB_OK*/, UINT p_nIDHelp /*=0*/)
{
int result = p_nDefaultResult;
switch(InstallUILevelSetting::Instance->GetUILevel())
{
// basic UI, dialogs appear and disappea
case InstallUILevelBasic:
{
g_hHook = SetWindowsHookEx(WH_CBT, CBTProc, NULL, GetCurrentThreadId());
CHECK_WIN32_BOOL(NULL != g_hHook, L"Error setting CBT hook");
result = AfxMessageBox(p_lpszText.c_str(), p_nType, p_nIDHelp);
CHECK_BOOL(0 != result, L"Not enough memory to display the message box.");
if (result == 0xFFFFFF) result = p_nDefaultResult;
}
break;
// silent, no UI
case InstallUILevelSilent:
result = p_nDefaultResult;
break;
// full UI
case InstallUILevelFull:
default:
result = AfxMessageBox(p_lpszText.c_str(), p_nType, p_nIDHelp);
break;
}
return result;
}
</code></pre>
<p>So, as you can see, in Basic mode it shows the message box, and after some time (if you didn't catch the moment to press any button), it automatically emulates pressing the default choice button. I was quite surprised when I realized it was designed to work like this – I've never seen such UI behavior before…</p>
<p>But, anyway, I suspect that a user would like to know why the installation terminated - a certain prerequisite is not installed. Since the mentioned behavior is hard-coded, the only option is to create a custom build of dotNetInstaller. The fix is trivial – make the <code>InstallUILevelBasic</code> case go down the same branch as <code>InstallUILevelFull</code>, that is, just show the message box. The next step is to build the solution – see the <em>Contributing to Source Code</em> chapter of dotNetInstaller.chm for instructions on how to build.</p>
<p>Finally, install the custom build instead of the official one and make sure your setup project picks the changes up. That's it!</p>
<p>As usual, I would appreciate any comments and notes!</p>
http://sklyarenko.net/posts/moving-to-dotnetinstaller-launchMoving to dotNetInstaller: launch conditions2011-02-18T15:59:00Z<p>In the <a href="./moving-to-dotnetinstaller-simplest-case">previous post</a> I've described how to implement the simplest use case of a bootstrapper: create a single EXE file and run the actual installation after extraction. Today I'd like to go further and illustrate a more production-like situation.</p>
<p>Ok, imagine that you'd like to add some checks to your installation package, and run the actual installation only if all those checks pass. This scenario has its own term: adding launch conditions. A launch condition is basically a statement which evaluates to either true or false. In case it's false, and the check is critical for the further installation, you terminate the installation process, as a rule. Otherwise, you let it do the job.</p>
<p>The <a href="http://dotnetinstaller.codeplex.com/">dotNetInstaller</a> has a concept called Installed Checks. It can check various areas, like the system registry, files or directories. Installed checks are only allowed to be placed under components. In the <a href="./moving-to-dotnetinstaller-simplest-case">simplest scenario</a> we avoided using components, relying just on the install complete command. Components refer to separate independent parts of your installation package. There are various types of components – the dotNetInstaller help file explains them all pretty well. So, my first guess was to add a single component of type <code>exe</code>, move my embedded files there and add a number of installed checks to it for the various prerequisites I require. Something like this:</p>
<p><img src="./images/20110218_DNI_prerequisite_wrong.png" class="img-fluid" alt="dotNetInstaller Prerequisite Wrong" title="Prerequisite Wrong" /></p>
<p>But my assumption was not correct. The trick is that an installed check (or a combination of those) placed under a component defines <strong>if this very component is installed</strong>. In other words, the most <em>supported</em> use case of dotNetInstaller is when you add all the components you need into your final package, and each of them verifies its own presence on the target machine. As a result of such verification, a component decides whether to install or not.</p>
<p>A quick search on <a href="http://codeplex.com/">codeplex.com</a> discussions gave me a link to the <a href="http://dotnetinstaller.codeplex.com/workitem/6387">appropriate feature request</a>, which proved my assumption it's not supported out of the box today. However, there is a workaround.</p>
<p>For each of the launch conditions, a separate component should be declared. The trick is that such components won't actually install anything, so we'll call them <em>fake</em> components. A component has a property called <code>failed_exec_command_continue</code>. It contains a message to be shown to the user in case a component fails to install, so put the appropriate message there, for instance, <em>.NET 3.5 SP1 is not installed. The installation program will terminate</em>. Make sure that both <code>allow_continue_on_error</code> and <code>default_continue_on_error</code> are set to <code>False</code> – otherwise a user will be presented with a prompt box instead of a simple message box. Finally, put a non-existent executable into the <em>executable</em> property, e.g. <code>fake.exe</code>. Now it's time to add the required number and combination of installed checks to this fake component, which will actually do the job. Here's what we get at the end of this shaman dancing:</p>
<p><img src="./images/20110218_DNI_prerequisite_right.png" class="img-fluid" alt="dotNetInstaller Prerequisite Right" title="Prerequisite Right" /></p>
<p>So, how does this work? The dotNetInstaller starts the installation from the .NET (3.5 SP1) component, and the first thing it does is evaluate the installed checks. If the evaluation succeeds, in our sample it means that .NET 3.5 SP1 is present on the target machine. In terms of dotNetInstaller, this means that the component we called <code>.NET (3.5 SP1)</code> is installed, and we do not trigger its installation. Otherwise, if the evaluation fails, the component is not present and dotNetInstaller starts its installation. It will try to call <code>fake.exe</code>, which <strong>does not exist</strong>, and will show a message. Since we forbade the rest of the installation from continuing, it will terminate. Exactly what we need!</p>
<p>Note, however, that the described behavior only looks this good <strong>in Basic UI mode</strong>. The error of the failed component is just logged to the log file, and no more annoying dialogs are displayed.</p>
<p>If you try this out, you'll notice one strange little thing with message boxes. In the next blog post I'll tell you what it is, and how to handle it. And this will be the end of the trilogy. :-)</p>
http://sklyarenko.net/posts/moving-to-dotnetinstaller-simplest-caseMoving to dotNetInstaller: the simplest case2011-01-27T17:22:00Z<p>I've been playing with one of the most popular <a href="http://wix.mindcapers.com/wiki/Bootstrapper">bootstrapper</a> applications available as a free and open-source tool – <a href="http://dotnetinstaller.codeplex.com/">dotNetInstaller</a>. On one hand, it turns out to be quite a powerful and feature-rich tool. But on the other, some things seem not intuitive to me, and there are still limitations. This post opens a series of (at least two) posts about dotNetInstaller and my own experience with it.</p>
<p>Ok, imagine you need to do a very simple thing: wrap your installation program resources into a single EXE file, let it extract the necessary files to somewhere under <code>%TEMP%</code>, run the installation UI wizard and finally drop the extracted files when the installation is done.</p>
<p>You should start by installing dotNetInstaller (I used the <a href="http://dotnetinstaller.codeplex.com/releases/view/50143">most recent 2.0 version</a>). One of the executables being installed is InstallerEditor.exe. It is a kind of IDE (smart editor) for dotNetInstaller project files, which are called configurations. The information about your project is stored as XML, which is easily DIFF-able and MERGE-able.</p>
<p>So, run InstallerEditor and select <strong>File</strong> > <strong>New</strong> – a new empty config file will be created. The first thing I suggest doing is enabling logging – it is a property of the config file you've just created. Next, right-click the root (and so far the only) node in the left pane, and select <strong>Add</strong> > <strong>Configurations</strong> > <strong>Setup Configuration</strong>. Actually, this is the only type of entity you can add under the config file node. Besides, at this level you can set the UI level for your bootstrapper. According to our task definition, 'basic' is just enough. By now, you should end up with something like this:</p>
<p><img src="./images/20110127_DNI_initial_config.png" class="img-fluid" alt="dotNetInstaller Initial Configuration" title="Initial Configuration" /></p>
<p>The setup configuration serves as a root for various entities: embedded files, installation components, UI controls, etc. However, our simplest scenario doesn't require most of it. Usually a configuration consists of a number of components, but again, we won't add them for now.</p>
<p>In order to include installation files into our bootstrapper, right-click the <code>install:</code> node and select <strong>Add</strong> > <strong>Embed</strong> > <strong>Embed Folder</strong>. Now fill in the properties of this embedded folder. Fortunately, there are just two – <code>sourcefolderpath</code> and <code>targetfolderpath</code>. Place the value <code>#APPPATH</code> into the first one and any value into the second. <code>#APPPATH</code> is one of several variable substitutions offered by dotNetInstaller out of the box and basically means that installation files will be picked either from the current folder, or from the one you specify in the <code>/a</code> switch of the linker. The <code>targetfolderpath</code> could logically be left empty, because it sets the name of the subfolder under the system temp location to extract the files to. But it is designed to be required, so feel free to paste anything here, for instance, <code>exe</code>. Ok, so now we are at this point:</p>
<p><img src="./images/20110127_DNI_embed_folder.png" class="img-fluid" alt="dotNetInstaller Embed Folder" title="Embed Folder" /></p>
<p>The installation wizard to run is also among those files we embedded, of course. So, in order to run it after the extraction is done, we should fill in the <code>complete_command</code> property of the configuration. For this, select the <code>install:</code> node and find the set of properties prefixed with <code>complete_command</code>. As you can see, the configuration entity has lots of properties to configure and is quite flexible. The <code>complete_command</code> should store the command line to run on successful installation completion. You can specify different values for each of the 3 UI modes: <code>full</code>, <code>basic</code> and <code>silent</code>. Actually, if <code>basic</code> or <code>silent</code> are not specified, it will fall back to just <code>complete_command</code>.</p>
<p>Besides, we'd like to show CAB extraction dialog. This is especially useful when the files are large and it takes some time to extract. Set <code>show_cab_dialog</code> to <code>true</code>. Optionally, customize other properties of the CAB extraction dialog, like <code>Caption</code> and <code>Message</code>. So, summarizing these two paragraphs, we now have the following configuration:</p>
<p><img src="./images/20110127_DNI_complete_command.png" class="img-fluid" alt="dotNetInstaller Complete Command" title="Complete Command" /></p>
<p>Pay attention to the <code>cab_path</code> property. In this form it basically means: take the system <code>%TEMP%</code> location, and create a subfolder in it named with a random GUID. This guarantees the uniqueness of the extraction target location, and you probably won't ever want to change it. Now, this magic location can be referenced as <code>#CABPATH</code> by other properties. For instance, this is what we have done for <code>complete_command</code>. The value says: go to the folder the files were just extracted to, go down to its <code>exe</code> subfolder (remember <code>targetfolderpath</code>?) and run <code>InstallWizard.exe</code>.</p>
<p>And finally, some more details. Make sure <code>auto_start</code>, <code>wait_for_complete_command</code> and <code>cab_path_autodelete</code> are all set to <code>true</code>. Obviously, this will instruct our bootstrapper to start automatically and auto-delete the extracted files after the complete command finishes.</p>
<h2 id="linking-and-running">Linking and running</h2>
<p>Before building the project, you can run it with dotNetInstaller.exe to see the UI. Just run <code>dotNetInstaller.exe /ConfigFile configuration.xml</code>. But <span style="color:red"><strong>this won't embed any files</strong></span>. As a result, <span style="color:red"><strong>you'll only be able to check the UI</strong></span> (which is obviously not the point in our case). <span style="color:red"><strong>All settings which rely on embedded files will fail.</strong></span></p>
<p>Instead, we'll link the sources into final <code>setup.exe</code>. The following command does the job:</p>
<pre><code class="language-BAT">InstallerLinker.exe /o:setup.exe /t:dotNetInstaller.exe /c:install_config.xml /i:my.ico /a:source /v+
</code></pre>
<p>Here, <code>/o:</code> stands for the output file name, <code>/t:</code> is the template EXE to build from – be sure to always set it to dotNetInstaller.exe, <code>/c:</code> is the path to the configuration file we have been editing all this time, <code>/i:</code> is obviously the path to the icon to use as the application icon for setup.exe, <code>/a:</code> is the path to the installation files to embed, and finally, <code>/v+</code> turns the verbose logging on. In case there are no errors, you'll see the following output:</p>
<p><img src="./images/20110127_DNI_linker_output.png" class="img-fluid" alt="dotNetInstaller Linker Output" title="Linker Output" /></p>
<p>Now you have setup.exe, which extracts your installation files (showing the progress), and starts your main InstallWizard.exe in case of successful extraction.</p>
<p>That's it! As usual, your comments and notes are welcome.</p>
http://sklyarenko.net/posts/back-to-basics-versioned-unversionedBack to basics: Versioned, Unversioned and Shared fields2010-09-10T02:23:00Z<p>It is well-known that each field of a template can be versioned (the default option), unversioned or shared. The Template Builder UI exposes the <code>Unversioned</code> and <code>Shared</code> properties as two independent checkboxes. And thus, even though it's a very basic Sitecore concept, it is sometimes asked <a href="http://sdn.sitecore.net/forum//ShowPost.aspx?PostID=29034">what's the point of marking a field both shared and unversioned</a>. The answer is "a field marked both shared and unversioned is still a shared field". Think about "shared" as a superset of "unversioned" – a field can't be shared (between all versions of all languages) without being unversioned (between all versions of one language).</p>
<p>Let’s see how it works under the hood when the field "sharing" level is changed. Let's create a simple template with just a single field. We’ll keep the defaults so far (versioned). Now create a content item based on this template and fill in the field.</p>
<p>Sitecore fields are stored in three different tables inside the database: <code>VersionedFields</code>, <code>UnversionedFields</code> and <code>SharedFields</code>. The names are quite self-explanatory. Let's run the following SQL query:</p>
<pre><code class="language-SQL">SELECT * FROM VersionedFields WHERE FieldId = '{GUID-GOES-HERE-...}'
</code></pre>
<p>As a result, one record is returned – the field information of the item we've just created is stored in the VersionedFields table. Similar queries against the UnversionedFields and SharedFields tables return 0 records.</p>
<p>Now change the field to be Unversioned and run all 3 queries again – you'll get 1 record from the UnversionedFields table and 0 from the others. Change the field to be both Shared and Unversioned and repeat the experiment – the field info now resides in the SharedFields table. Now, if you uncheck Unversioned and leave it just Shared, it will still show 1 record for the SharedFields table and 0 for the others. So, here's the evidence!</p>
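<p>By the way, the "sharing" level of a field can also be inspected from code. A quick sketch, assuming the standard Sitecore API (the item path and field name are mine, for illustration only):</p>
<pre><code class="language-csharp">// Hypothetical item path and field name
Sitecore.Data.Items.Item item = Sitecore.Context.Database.GetItem("/sitecore/content/Home");
Sitecore.Data.Fields.Field field = item.Fields["MyField"];
bool shared = field.Shared;           // true => the value lives in SharedFields
bool unversioned = field.Unversioned; // true => UnversionedFields (unless also shared)
</code></pre>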
<p>NOTE: changing the field "sharing" level might result in data loss (similar to a type cast operation in C#), and Sitecore warns you about it.</p>
<p>You might think the two checkboxes are to blame for this confusion. Check out the hot VS extension called <a href="http://visualstudiogallery.msdn.microsoft.com/en-us/44a26c88-83a7-46f6-903c-5c59bcd3d35b/view">Sitecore Rocks</a> – a brand new tool (CTP for now) for developers working with Sitecore projects in VS 2010. It seems to look more natural this way, doesn't it?</p>
<p><img src="./images/201009_SCRocks.png" class="img-fluid" alt="Sitecore Rocks" title="Sitecore Rocks" /></p>