When [NotNull] is null

I prefer to be very explicit about the nullability of references wherever possible. Happily, most modern languages support non-nullable references natively (e.g. TypeScript, Swift, Rust, Kotlin).

As of version 8, C# also supports non-nullable references, but we haven't migrated to using that enforcement yet. Instead, we've used the JetBrains nullability annotations for years.1

Recently, I ended up with code that returned a null even though R# was convinced that the value could never be null.

The following code looks like it could never produce a null value, but somehow it does.

[NotNull] // The R# checker will verify that the method does not return null
public DynamicString GetCaption()
{
  var result = GetDynamic() ?? GetString() ?? new DynamicString();

  return result;
}

[CanBeNull]
private DynamicString GetDynamic() { ... }

[CanBeNull]
private string GetString() { ... }

So, here we have a method GetCaption() whose result can never be null. It calls two methods that may return null, but then ensures that its own result can never be null by creating a new object if neither of those methods produces a value. The nullability checker in ReSharper is understandably happy with this.

At runtime, though, a call to GetCaption() was returning null. How can this be?

The Culprit: An Implicit Operator

There is a bit of code missing that explains everything. A DynamicString declares implicit operators that allow the compiler to convert objects of that type to and from a string.

public class DynamicString
{
  // ...Other stuff

  [CanBeNull]
  public static implicit operator string([CanBeNull] DynamicString dynamicString) => dynamicString?.Value;
}
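
The conversion in the other direction isn't shown above, but the fix further down relies on it. A plausible sketch (the constructor taking a string is an assumption, not from the original code):

// Also declared in DynamicString
[CanBeNull]
public static implicit operator DynamicString([CanBeNull] string value)
  => value == null ? null : new DynamicString(value);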

A DynamicString contains zero or more key/value pairs mapping a language code (e.g. "en") to a value. If the object has no translations, then it is equivalent to null when converted to a string. Therefore, a null or empty DynamicString converts to null.

If we look at the original call, the compiler does the following:

  1. The call to GetDynamic() sets the type of the expression to DynamicString.
  2. The compiler can only apply the ?? operator if it can find a common type for both sides (applying an implicit conversion if necessary); otherwise, the code is in error.
  3. Since DynamicString can be coerced to string, the compiler decides on string for the type of the first coalesced expression.
  4. The next coalesce operator (??) triggers the same logic, coercing the right half (DynamicString) to the type it has in common with the left half (string, from before).
  5. Since the type of the expression must be string in the end, even if we fall back to the new DynamicString(), it is coerced to a string and thus, null.

Essentially, what the compiler builds is:

var result = 
  (string)GetDynamic() ?? 
  GetString() ?? 
  (string)new DynamicString();

The R# nullability checker sees only that the final argument in the expression is a new expression and determines that the [NotNull] constraint has been satisfied. The compiler, on the other hand, executes the final cast to string, converting the empty DynamicString to null.

The Fix: Avoid Implicit DynamicString-to-string Conversion

To fix this issue, I avoided the ?? coalescing operator. Instead, I rewrote the code to return DynamicString wherever possible and to implicitly convert from string to DynamicString, where necessary (instead of in the other direction).

public DynamicString GetCaption()
{
  var d = GetDynamic();
  if (d != null)
  {
    return d;
  }

  var s = GetString();
  if (s != null)
  {
    return s; // Implicit conversion to DynamicString
  }

  return GetDefault();
}

Conclusion

The takeaway? Use features like implicit operators sparingly and only where absolutely necessary. A good rule of thumb is to define such operators only for structs, which are value types and can never be null.

I think the convenience of being able to use a DynamicString as a string outweighs the drawbacks in this case, but YMMV.



  1. Java also has @NonNull and @Nullable annotations, although it's unclear which standard you're supposed to use.

Configuring .NET Framework Assembly-binding Redirects

After years of getting incrementally better at fixing binding redirects, I've finally taken the time to document my methodology for figuring out what to put into app.config or web.config files.

The method described below works: when you get an exception because the runtime gets an unexpected version of an assembly---e.g. "The located assembly’s manifest definition does not match the assembly reference"---this technique lets you formulate a binding-redirect that will fix it. You'll then move on to the next binding issue, until you've taken care of them all and your code runs again.

Automatic Binding Redirects

If you have an executable, you can usually get Visual Studio (or MSBuild) to regenerate your binding redirects for you. Just delete them all out of the app.config or web.config and Rebuild All. You should see a warning appear that you can double-click to generate binding redirects.

If, however, this doesn't work, then you're on your own for discovering which version you actually have in your application. You need to know the version or you can't write the redirect. You can't just take any number: it has to match exactly.
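
If the warning never shows up, it can also help to set the MSBuild properties that control redirect generation explicitly in the project file. This is a sketch rather than a guaranteed fix; whether it helps depends on your project type:

<PropertyGroup>
  <AutoGenerateBindingRedirects>true</AutoGenerateBindingRedirects>
  <!-- Class libraries (e.g. test assemblies) don't get generated redirects unless this is set -->
  <GenerateBindingRedirectsOutputType>true</GenerateBindingRedirectsOutputType>
</PropertyGroup>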

Testing Assemblies

Where the automatic generation of binding redirects doesn't work is for unit-test assemblies.

My most recent experience was when I upgraded Quino-Windows to use the latest Quino-Standard. The Quino-Windows test assemblies were suddenly no longer able to load the PostgreSql driver. The Quino.Data.PostgreSql assembly targets .NET Standard 2.0. The testing assemblies in Quino-Windows target .NET Framework.

After the latest upgrade, many tests failed with the following error message:

Could not load file or assembly 'System.Runtime.CompilerServices.Unsafe, Version=4.0.4.1, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

This is the version that the runtime was looking for (4.0.4.1 in the message above). It will either be the version required by the loading assembly (npgsql in this case) or the version already specified in the app.config (which is almost certainly out of date).

Which File Was Loaded?

To find out the file version that your application actually uses, you have to figure out which assembly .NET loaded. A good first place to look is in the output folder for your executable assembly (the testing assembly in this case).

If, for whatever reason, you can't find the assembly in the output folder---or it's not clear which file is being loaded---you can tease the information out of the exception itself.

  1. From the exception settings, make sure that the debugger will stop on a System.IO.FileLoadException
  2. Debug your test
  3. The debugger should break on the exception

Click "View Details" to show the QuickWatch window for the exception. There's a property called FusionLog that contains more information.

The log is quite detailed and shows you the configuration file that was used to calculate the redirect as well as the file that it loaded.
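
You can also dump the same information without the debugger---useful on a CI agent---by forcing the load and printing the fusion log yourself. A minimal sketch (the assembly name is taken from the error message above):

using System;
using System.IO;
using System.Reflection;

try
{
  // Force the load that fails at runtime.
  Assembly.Load("System.Runtime.CompilerServices.Unsafe, Version=4.0.4.1, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a");
}
catch (FileLoadException exception)
{
  // The fusion log lists the probing paths, the config file used and the file actually loaded.
  Console.WriteLine(exception.FusionLog);
}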

Which Version Is It?

With the path to the assembly in hand, it's time to get the assembly version.

Showing the file properties will most likely not show you the assembly version. For third-party assemblies (e.g. Quino), the file version is often the same as the assembly version (for pre-release versions, it's not). However, Microsoft loves to use a different file version than the assembly version. That means that you have to open the assembly in a tool that can dig that version out of the assembly manifest.

The easiest way to get the version number is to use the free tool JetBrains DotPeek or use the AssemblyExplorer in JetBrains ReSharper or JetBrains Rider.
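
If you'd rather not open a decompiler, you can also read the assembly version (as opposed to the file version) with a couple of lines of code, e.g. in LINQPad or the C# Interactive window. The path below is a placeholder:

using System;
using System.Reflection;

// Reads the version from the assembly manifest, not the Windows file version.
var name = AssemblyName.GetAssemblyName(@"C:\path\to\output\System.Runtime.CompilerServices.Unsafe.dll");
Console.WriteLine(name.Version); // e.g. 4.0.5.0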

You can see the three assemblies that I had to track down in the following screenshot.

Writing Binding Redirects

Armed with the actual versions and the public key-tokens, I was ready to create the app.config file for my testing assembly.

And here it is in text/code form:

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="System.Numerics.Vectors" publicKeyToken="b03f5f7f11d50a3a" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-4.1.4.0" newVersion="4.1.4.0" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="System.Runtime.CompilerServices.Unsafe" publicKeyToken="b03f5f7f11d50a3a" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-4.0.5.0" newVersion="4.0.5.0" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="System.Threading.Tasks.Extensions" publicKeyToken="cc7b13ffcd2ddd51" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-4.2.0.1" newVersion="4.2.0.1" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>

Looking for Developers in 2020

2020 is shaping up to be a busy year...so we're looking for help from anyone who's got what it takes and who's interested in working on interesting projects with a great team.

Please take a minute to check out the following job descriptions.

If you are interested, please don't hesitate to apply from the pages linked above. If you know of someone who might be interested, we'd appreciate it if you could let them know that we're looking for them.

Thanks!

Improving NUnit integration with testing harnesses

These days nobody who's anybody in the software-development world is writing software without tests. Just writing them doesn't help make the software better, though. You also need to be able to execute tests -- reliably, quickly and repeatably.

To do that, you'll need a test runner, which is a different tool from the compiler or the runtime. That is, just because your tests compile (satisfy all of the language rules) and could be executed doesn't mean that you're done writing them yet.

Testing framework requirements

Every testing framework has its own rules for how the test runner selects methods for execution as tests. The standard configuration options are:

  • Which classes should be considered as test fixtures?
  • Which methods are considered tests?
  • Where do parameters for these methods come from?
  • Is there startup/teardown code to execute for the test or fixture?

Each testing framework will offer different ways of configuring your code so that the test runner can find and execute setup/test/teardown code. To write NUnit tests, you decorate classes, methods and parameters with C# attributes.

The standard scenario is relatively easy to execute -- run all methods with a Test attribute in a class with a TestFixture attribute on it.
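
As a baseline, that standard scenario looks something like this (the class and assertion are invented for illustration):

using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
  [SetUp]
  public void SetUp()
  {
    // Runs before each test in this fixture.
  }

  [Test]
  public void AddsTwoNumbers()
  {
    Assert.AreEqual(4, 2 + 2);
  }
}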

Test-runner Requirements

When you consider multiple base classes and generic type arguments, each of which may also have NUnit attributes, things get a bit less clear. In that case, not only do you have to know what NUnit offers as possibilities but also whether the test runner that you're using also understands and implements the NUnit specification in the same way. Not only that, but there are legitimate questions for which even the best specification does not provide answers.

At Encodo, we use Visual Studio 2015 with ReSharper 9.2 and we use the ReSharper test runner. We're still looking into using the built-in VS test runner -- the continuous-testing integration in the editor is intriguing1 -- but it's quite weak when compared to the ReSharper one.

So, not only do we have to consider what the NUnit documentation says is possible, but we must also know how the R# test runner interprets the NUnit attributes and what is supported.

Getting More Complicated

Where is there room for misunderstanding? A few examples (a sketch of the last case follows the list):

  • What if there's a TestFixture attribute on an abstract class?
  • How about a TestFixture attribute on a class with generic parameters?
  • Ok, how about a non-abstract class with Tests but no TestFixture attribute?
  • And, finally, a non-abstract class with Tests but no TestFixture attribute, but there are non-abstract descendants that do have a TestFixture attribute?
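
A sketch of that last case -- the shape that causes the most disagreement between runners (all names are invented):

using NUnit.Framework;

// Not abstract and no [TestFixture], but it contains tests.
public class DriverTests<TDriver>
  where TDriver : new()
{
  [Test]
  public void CanCreateDriver()
  {
    Assert.IsNotNull(new TDriver());
  }
}

// Stand-ins for real driver classes.
public class PostgreSqlDriver { }
public class SqlServerDriver { }

[TestFixture]
public class PostgreSqlDriverTests : DriverTests<PostgreSqlDriver> { }

[TestFixture]
public class SqlServerDriverTests : DriverTests<SqlServerDriver> { }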

In our case, the answer to these questions depends on which version of R# you're using. Even though it feels like you configured everything correctly and it logically should work, the test runner sometimes disagrees.

  • Sometimes it shows your tests as expected, but refuses to run them (Inconclusive FTW!)
  • Or other times, it obstinately includes generic base classes that cannot be instantiated into the session, then complains that you didn't execute them. When you try to delete them, it brings them right back on the next build. When you try to run them -- perhaps not noticing that it's those damned base classes -- then it complains that it can't instantiate them. Look of disapproval.

Throw the TeamCity test runner into the mix -- which is ostensibly the same as that from R# but still subtly different -- and you'll have even more fun.

Improving Integration with the R# Test Runner

At any rate, now that you know the general issue, I'd like to share the ground rules we've come up with that avoid all of the issues described above. The text below comes from the issue I created for the impending release of Quino 2.

Environment

  • Windows 8.1 Enterprise
  • Visual Studio 2015
  • ReSharper 9.2

Expected behavior

Non-leaf-node base classes should never appear as nodes in test runners. A user should be able to run tests in descendants directly from a fixture or test in the base class.

Observed behavior

Non-leaf-node base classes are shown in the R# test runner in both versions 9 and 10. A user must navigate to the descendant to run a test. The user can no longer run all descendants or a single descendant directly from the test.

Analysis

Relatively recently, in order to better test a misbehaving test runner and accurately report issues to JetBrains, I standardized all tests to the same pattern:

  • Do not use abstract anywhere (the base classes don't technically need it)
  • Use the TestFixture attribute only on leaf nodes

This worked just fine with ReSharper 8.x but causes strange behavior in both R# 9.x and 10.x. We discovered recently that not only did the test runner act strangely (something that they might fix), but also that the unit-testing integration in the files themselves behaved differently when the base class is abstract (something JetBrains is unlikely to fix).

You can see that R# treats a non-abstract class with tests as a testable entity, even when it doesn't actually have a TestFixture attribute and even when it expects a generic type parameter in order to be instantiated.

Here it's not working well in either the source file or the test runner. In the source file, you can see that it offers to run tests in a category, but not the tests from actual descendants. If you try to run or debug anything from this menu, it shows the fixture with a question-mark icon and marks any tests it manages to display as inconclusive. This is not surprising: although the test fixture isn't abstract, it requires a type parameter in order to be instantiated.

image

Here it looks and acts correctly:

image

I've reported this issue to JetBrains, but our testing structure either isn't very common or it hasn't made it to their core test cases, because neither 9 nor 10 handles it as well as the 8.x runner did.

Now that we're also using TeamCity a lot more to not only execute tests but also to collect coverage results, we'll capitulate and just change our patterns to whatever makes R#/TeamCity the happiest.

Solution

  • Make all testing base classes that include at least one Test or Category attribute abstract. Base classes that do not have any testing attributes do not need to be made abstract.

Once more to recap our ground rules for making tests (a short code sketch follows the list):

  • Include TestFixture only on leaves (classes with no descendants)
  • You can put Category or Test attributes anywhere in the hierarchy, but then the class must be declared abstract.
  • Base classes that have no testing attributes do not need to be abstract
  • If you feel you need to execute tests in both a base class and one of its descendants, then you're probably doing something wrong. Make two descendants of the base class instead.
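
In code, the recommended pattern looks something like this (class names invented for illustration):

using NUnit.Framework;

// Has testing attributes, so it must be abstract; no [TestFixture] here.
[Category("Metadata")]
public abstract class MetadataTestsBase
{
  [Test]
  public void CanLoadModel()
  {
    // Placeholder assertion; real tests would exercise the model.
    Assert.IsTrue(true);
  }
}

// Only leaf classes get the [TestFixture] attribute.
[TestFixture]
public class SqlServerMetadataTests : MetadataTestsBase { }

[TestFixture]
public class PostgreSqlMetadataTests : MetadataTestsBase { }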

When you make the change, you can see the improvement immediately.

image


  1. ReSharper 10.0 also offers continuous testing, but our experiments with the EAP builds and the first RTM build left us underwhelmed and we downgraded to 9.2 until JetBrains manages to release a stable 10.x.

Azure Linked Accounts and SSH Keys

Azure DevOps allows you to link multiple accounts.

Our concrete use case was:

  • User U1 was registered with an Azure DevOps organization O1
  • Microsoft did some internal management and gave our partner account a new organization O2, complete with new accounts for all users. Now I have user U2 as well, registered with O2.
  • U2 was unable to take tests to qualify for partner benefits, so I had to use U1 but link the accounts so that those test results accrued to O2 as well as O1.
  • We want to start phasing out our users from O1, so we wanted to remove U1 from O1 and add U2

Are we clear so far? U1 and U2 are linked because reasons. U1 is old and busted; U2 is the new hotness.

The linking has an unexpected side-effect when managing SSH keys. If you have an SSH key registered with one of the linked accounts, you cannot register an SSH key with the same signature with any of the other accounts.

This is somewhat understandable (I guess), but while the error message indicates that you have a duplicate, it doesn't tell you that the duplicate is in another account. When you check the account that you're using and see no other SSH keys registered, it's more than a little confusing.

Not only that, but if the user to which you've added the SSH key has been removed from the organization, it isn't at all obvious how you're supposed to access your SSH key settings for an account that no longer has access to Azure DevOps (in order to remove the SSH key).

Instead, you're left with an orphan account that's sitting on an SSH key that you'd like to use with a different account.

So, you could create a new SSH key or you could do the following:

  • Re-add U1 to O1
  • Remove SSH key SSH1 from U1
  • Register SSH key SSH1 with U2
  • Profit

If you can't add U1 to O1 anymore, then you'll just have to generate and use a new SSH key for Azure. It's not an earth-shatteringly bad user experience, but it's interesting to see how several logical UX decisions led to a place where a couple of IT guys were confused for long minutes.
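
If you do end up generating a new key, a standard RSA key is the safe choice for Azure DevOps. A sketch (the comment and file name are up to you):

ssh-keygen -t rsa -b 4096 -C "bob@encodo.ch" -f ~/.ssh/azure_devops

Then register the contents of azure_devops.pub under "SSH public keys" in the user settings of U2.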

Visual Studio 2019 Survey

Visual Studio 2019 (VS) asked me this morning if I was interested in taking a survey to convey my level of satisfaction with the IDE.

VS displays the survey in an embedded window using IE11.1 I captured the screen of the first thing I saw when I agreed to take the survey.

I know it's the SurveyMonkey script that's failing, but it's still not an auspicious start.



  1. I'd just upgraded to Windows 10 build 1903, which includes IE 11.418.18362.0. I can't imagine that they didn't test this combination.

Using Git efficiently: SmartGit + BeyondCompare

I've written about using SmartGit (SG) before1 2 and I still strongly recommend that developers who manage projects use a UI for Git.

If you're just developing a single issue at a time and can branch, commit changes and make pull requests with your IDE tools, then more power to you. For this kind of limited workflow, you can get away with a limited tool-set without too big of a safety or efficiency penalty.

However, if you need an overview or need to do more management, then you're going to sacrifice efficiency and possibly correctness if you use only the command line or IDE tools.

I tend to manage Git repositories, which means I'm in charge of pruning merged or obsolete branches and making sure that everything is merged. A well-rendered log view and overview of branches is indispensable for this kind of work.

SmartGit

I have been and continue to be a proponent of SmartGit for all Git-related work. It not only has a powerful and intuitive UI, it also supports pull requests, including code comments that integrate with BitBucket, GitLab and GitHub, among others.

It has a wonderful log view that I now regularly use as my standard view. It's fast and accurate (I almost never have to refresh explicitly to see changes) and I have a quick overview of the workspace, the index and recent commits. I can search for files and easily get individual logs and blame.

The file-differ has gotten a lot better and has almost achieved parity with my favorite diffing/merging tool Beyond Compare. Almost, but not quite. The difference is still significant enough to justify Beyond Compare's purchase price of $60.

What's better in Beyond Compare?3

Diffing

  • While both differs have syntax-highlighting (and the supported file-types seem to be about the same), Beyond Compare distinguishes between significant and insignificant (e.g. comments) changes. It makes it much easier to see whether code or documentation has changed.
  • The intra-line diffing in Beyond Compare is more fine-grained and tends to highlight changes better. SmartGit is catching up in this regard.
  • You can re-align a diff manually using F7. This is helpful if you moved code and want to compare two chunks that the standard diff no longer sees as comparable.

Merging

I could live without the Beyond Compare differ, but not without the merger.

  • The 4-pane view shows left, base and right above as well as the target below, with the target window being editable. Each change has its own color, so you can see afterwards whether you took left, right or made manual changes.
  • The merge view includes a line-by-line differ that shows left, base, right and target lines directly above one another, with a scrollbar for longer lines.
  • The target view is color-coded to show the origin of each line of text: right, left, base or custom edited.
  • BeyondCompare makes a smart recommendation for how to merge a given conflict that is very often exactly what you want, which means that for many conflicts, you can just confirm the recommendation.
  • SmartGit has two separate windows for base vs. left/right and right/left vs. target. Long lines are really hard to decipher/merge in SmartGit.

Integrate Beyond Compare into SmartGit

To set up SmartGit to use Beyond Compare:

  1. Select Tools > Diff Tools
  2. Click the "Add..." button
  3. Set File Pattern to *
  4. Select "External diff tool"
  5. Set the command to C:\Program Files (x86)\Beyond Compare 4\BCompare.exe
  6. Set the arguments to "${leftFile}" "${rightFile}"
  7. Select Tools > Conflict Solvers
  8. Select "External Conflict Solver"
  9. Set File Pattern to *
  10. Set the command to C:\Program Files (x86)\Beyond Compare 4\BCompare.exe
  11. Set the arguments to "${leftFile}" "${rightFile}" "${baseFile}" "${mergedFile}"


  1. In Git: Managing local commits and branches (December 2016) and Programming in the modern/current age (February 2013)

  2. I am in no way affiliated with SmartGit.

  3. I am in no way affiliated with BeyondCompare.

Multi-language web sites

Why are multi-language web sites so hard to make? Even large companies like Microsoft, Google and Apple regularly deliver pages with mixed-language content.

This is probably due to several factors:

  1. Large web sites pull data from myriad sources, including CDNs and caching services. Each source needs to respect the requested language. If a source doesn't support the requested language, then just that piece of content will be delivered in the fallback language.
  2. Any proxies have to pass the requested language (and other headers) on to the backing server. If the backing server doesn't get the language request, then it can't respect the requested language, obviously.
  3. Any proxy that caches content has to respect the language header (as well as any other data-relevant headers) instead of just caching one copy per URL (see the sketch after this list). While this is standard for commercial proxies and CDNs, it might not be the case for bespoke software.
  4. Some services might have a different context (e.g. a logged-in user, detected via a token in the request) with different language settings than the requesting browser. This would mean that, while the main page content is pulled from the server in one language (e.g. en-US), the content for an embedded block might be requested as a logged-in user who has a different preferred language (e.g. de-CH). The server will likely honor the preferred language of the user account rather than the language included in the request, assuming it even gets the language from the original request.
  5. Finally, some companies1 are notoriously bad at multi-language software because they generally only acknowledge English, consider supporting other languages a nice-to-have, and figure that delivering English instead is an acceptable fallback because everyone can read English, right?
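
To make points (2) and (3) concrete: a backing service has to pick content based on the Accept-Language request header and tell any caches in front of it that responses vary by that header. A minimal ASP.NET Core sketch (the endpoint and translations are invented):

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

var app = WebApplication.Create(args);

app.MapGet("/tips", (HttpContext context) =>
{
    // A proxy or CDN must cache one copy per language, not one copy per URL.
    context.Response.Headers.Append("Vary", "Accept-Language");

    // Honor the requested language; fall back to English only if nothing matches.
    var requested = context.Request.Headers.AcceptLanguage.ToString();
    var content = requested.StartsWith("de") ? "Willkommen" : "Welcome";

    return Results.Text(content);
});

app.Run();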

The move to cloud-based and highly cached content has increased complexity considerably. Even if a company does everything right in (1), (2), and (3) above, the realities of (4) may still lead to a page that contains content in multiple languages.

That is, each piece of software is functioning as designed, but combining the output from those pieces of software leads to content that has multiple languages in it. At that point, you can either throw your hands in the air and give up...or you can start to redesign services to respect the requested language even if the user context's preferred language is different. This is not a decision you can make lightly and you run the risk of breaking the service's content in other places. Sometimes there is no right answer.

Since I live in Switzerland, which has 4 official languages, I've seen EULAs from Apple written in a combination of French, English, German and even a word or two of Italian.

The example below comes from Microsoft Edge's Tips page that they show when you start using the browser. Edge thinks that my default language is German despite the fact that my Windows is English. Microsoft tends to use the language of the region you're in (Switzerland) rather than the display language that you've expressly set, but...that's another discussion.

At any rate, Edge thinks I want German content2 but Microsoft can't even reliably deliver German content for this main page, defaulting to English content in several places.



  1. I'm looking at you, US companies.

  2. I quickly checked the settings and could not find out how to change the list of languages I'd like to include in my browser requests. Other browsers do provide a list of accepted languages, but Edge's settings are quite limited.

How to use Authenticated NuGet Feeds

Much of Encodo's infrastructure is now housed in Azure. Each employee has an account in Azure.

From Visual Studio

Because users are already authenticated in Visual Studio (to register it), they will be able to access Azure NuGet feeds through Visual Studio without any further intervention. You can restore/install/update without providing any additional credentials.

From the Command Line

As of today, access to Azure Feeds from the command line is granted only if you provide credentials with the source.

Sources created in the Visual Studio UI do not include credentials.

Solutions that include sources in a NuGet.config do not have credentials (because those files are stored in the repository).

Therefore, you must register a NuGet source with authentication for Azure for your user.

Personal NuGet.Config

You can find your NuGet.config in your roaming profile on Windows, at C:\Users\<username>\AppData\Roaming\NuGet\NuGet.Config.

Instead of editing the file directly, use the NuGet command line to add an authenticated source.

Create a Personal Access Token (PAT)

You cannot just use your username/password to create an authenticated source. Instead, you have to use a PAT.

Follow the instructions below to create a PAT for your Azure account.

  • Log in to Azure
  • From the user settings (top-right), select Security
  • Select "Personal access tokens" in the list on the left
  • Press the "New Token" button at the top-left of the list
  • Name it NuGet Feed Access
  • Leave the organization at the default (encodo for employees)
  • Set the expiration to something reasonable.
    • 90 days is probably OK.
    • You can choose up to a year.
    • You can update the expiration date at a later time.
  • Select Custom defined for Scopes
  • Click the "Show all scopes" link at the bottom of the dialog (above the "Create" button)
  • Scroll down to "Packaging" and select "Read"
  • Press "Create" to add the token
  • Copy the token immediately. It will never be shown again.
  • Store the token somewhere safe (a password manager is a good idea). If you forget it, you'll have to regenerate the token.

Some extra tips:

  • Set up a reminder in your calendar for when your PAT is about to expire
  • You can change the expiration date for the token even after you've created it

Add an Authenticated NuGet Source

Now you can set up a NuGet source for your user.

Execute the following command, replacing the bracketed arguments as follows:

  • <username>: your own user name (e.g. <bob@encodo.ch>)
  • <PAT> the PAT you generated above
  • Change the URL for a feed other than Quino

nuget sources add -Name "Azure (Authenticated)" -Source https://encodo.pkgs.visualstudio.com/_packaging/Quino/nuget/v3/index.json -UserName <username> -Password <PAT>
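
To confirm that the source was registered and that the PAT works, you can list the configured sources and then restore any solution that uses the feed:

nuget sources list
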
Source Link Flakiness in Visual Studio 2017 and 2019

tl;dr: If MSBuild/Visual Studio tells you that "the value of SourceRoot.RepositoryUrl is invalid..." and you have no idea what it's talking about, it might help to add the following to the offending project; the error becomes a warning.

<PropertyGroup>
   <EnableSourceControlManagerQueries>false</EnableSourceControlManagerQueries>
</PropertyGroup>

Microsoft introduced this fancy new feature called Source Link that integrates with NuGet servers to deliver symbols and source code for packages.

This feature is opt-in and library and package providers are encouraged to enable it and host packages on a server that supports Source Link.

Debugging Experience

The debugging experience is seamless. You can debug into Source-Linked code with barely a pause in debugging.

The only drawback is that you don't have local sources, so it's trickier to set breakpoints in sources that haven't been downloaded yet. When you had local sources, you could open the source file you wanted and set a breakpoint, knowing that the debugger would look for the file in that path and be able to stop on the breakpoint.

Also, Visual Studio's default behavior is to show all debugging sources in a single tab, so when your debug session ends, you don't even have all of the files open that you looked at. If you hover the tab, you can figure out the storage location, but it's a long and not very intuitive path. Also, that location only contains the sources that you've already requested.

Still, it's a neat feature.

Getting Pushy

However, Microsoft is doing some things that suggest that the feature is no longer 100% opt-in. The following error message cropped up in a project with absolutely no Source Link settings or packages. It doesn't even directly use packages that have Source Link enabled (not that that should make a difference).

There are actually three problems here:

  1. The compiler is complaining about Source Link settings on a project that hasn't opted in to Source Link.
  2. The compiler is breaking the build when Source Link cannot be enabled as expected.
  3. The error/warning messages are extremely oblique and give no indication how one should address them. (Another example is the warning message shown below.)

It's the second one that makes this issue so evil. The issue crops up literally out of nowhere and then prevents you from working. The project builds. Even if I wanted Source Link on my project but it wasn't set up correctly, this is no reason to prevent me from running/debugging my product.

And, honestly, because of reason #3, I'm still not sure what the actual problem is or how I can address it with anything but a workaround.

Because, yes, I found a workaround. Else, I wouldn't be writing this article.

Things that Didn't Work

The first time I encountered this and lost hours of precious time, I "fixed" it by removing Source Link support for some packages that my product imports. At the time, I thought I was getting the error message because TeamCity was producing corrupted packages when Source Link was included. It was not a quick fix to open up a different solution, remove Source Link support and re-build all packages on CI, but it seemed to work.

Upon reflection and further reading, this is unlikely to have been the real reason I was seeing the message or why it magically went away. Source Link support in a NuGet server involves having access to source control in order to be able to retrieve the requested sources.

It's honestly still unclear to me why Visual Studio/MSBuild is complaining about this at build-time in a local environment.

The Workaround

Today, I got the error again, in a different project. The packages I'd suspected yesterday were not included in this product. Another, very similar product used the exact same set of packages without a problem.

Even though the issue Using SourceLink without .git directory isn't really the issue I'm having, I eventually started copying stuff from the answers into the project in my solution that failed to build.

Add the following to any of the offending projects and the error becomes a warning.

<PropertyGroup>
   <EnableSourceControlManagerQueries>false</EnableSourceControlManagerQueries>
</PropertyGroup>

The ensuing warning? I can't help you there. I threw a few other directives into the project file, but to no avail. I'm not happy to have a compile warning for a feature I never enabled and cannot disable, but I'm hoping that Microsoft will fix this sooner rather than later.