System.Net.Sockets.SocketException: An address incompatible with the requested protocol was used. Error Code: 10047
This means I specified one type of protocol but was using another. So why not figure out how to create an IPv4 and IPv6 agnostic solution? Which I did.
Initially the problem I ran into was caused by the Dns.GetHostAddressesAsync method, as it resolves a DNS name to possibly multiple IP addresses (e.g. IPv4 and IPv6), but which one should the system choose? With this protocol-agnostic solution, it doesn't matter.
As the BCL (Base Class Library) is not designed to support more than one protocol at a time, it looks a bit clunky when IPv4 addresses imitate IPv6 addresses, but it works. An IPv4 address will appear like ::ffff:127.0.0.1, so you will have to use properties and methods like IsIPv4MappedToIPv6 and MapToIPv4.
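To illustrate, here is a minimal sketch of unwrapping such a mapped address; the peer address used is hypothetical:

```csharp
using System;
using System.Net;

// An IPv4 peer connecting to a dual-mode socket shows up as an
// IPv4-mapped IPv6 address. The address below is just for illustration.
IPAddress remote = IPAddress.Parse("::ffff:192.168.1.10");

if (remote.IsIPv4MappedToIPv6)
{
    // Convert back to a plain IPv4 address for logging or display.
    IPAddress v4 = remote.MapToIPv4();
    Console.WriteLine(v4); // 192.168.1.10
}
```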
First you need to specify IPv6 as the protocol via AddressFamily.InterNetworkV6 and then set DualMode to true, which in effect sets SocketOptionName.IPv6Only to false.
// TCP Server
var listener = new TcpListener(new IPEndPoint(IPAddress.IPv6Any, 8080));
listener.Server.DualMode = true;
listener.Start();
// TCP Client
var client = new TcpClient(AddressFamily.InterNetworkV6);
client.Client.DualMode = true;
await client.ConnectAsync("127.0.0.1", port: 8080);
// UDP Server
var listener = new UdpClient(AddressFamily.InterNetworkV6);
listener.Client.DualMode = true;
listener.Client.Bind(new IPEndPoint(IPAddress.IPv6Any, 8080));
// UDP Client
var client = new UdpClient(AddressFamily.InterNetworkV6);
client.Client.DualMode = true;
client.Connect("127.0.0.1", port: 8080);
For simplicity I used TcpListener, TcpClient and UdpClient, even for the UDP server. I set the DualMode property on the underlying Socket via the client.Client property. The server will open a dual-mode socket, and the client will open either an IPv4 or an IPv6 socket depending on the address given.
Full code samples for both TCP and UDP can be found at my DotNetCore-DualNetwork-IPv4IPv6 GitHub repo.
During my investigations, I discovered that the DualMode property defaults to true or false depending on which Socket constructor is used. The Socket(SocketType, ProtocolType) constructor defaults DualMode to true, but all others default it to false. None of TcpListener, TcpClient or UdpClient instantiates a Socket with DualMode set to true.
Years ago, we all moved our Wi-Fi to the WPA security protocol, as WEP was deemed insecure. Now a vulnerability has been found in WPA1/2 too, making it possible for malicious attackers to inspect and modify the traffic between a computer and an access point.
The vulnerability is known as KRACK (Key Reinstallation Attacks) and is in the Wi-Fi standard itself, so all devices are affected: laptops, access points, printers, phones… anything with Wi-Fi. The vulnerability is on the client side, but many access points act as repeaters etc., so do patch all Wi-Fi devices, otherwise the communication might be compromised.
Bleeping Computer is keeping a list of affected devices and of the firmware and driver updates that mitigate the problem.
Microsoft fixed the issue on October 10th and rolled out the update on Patch Tuesday, so if you are keeping your device up-to-date, then you are all safe. Apple has not yet released a patch.
It does not sound intimidating, but this is a major vulnerability affecting Google Chromebooks; HP, Lenovo and Fujitsu PCs and laptops; smart cards; routers; IoT devices – in short, any device with a hardware security chip (like a TPM) from Infineon Technologies produced since 2012.
RSA keys are used to securely store secrets such as passwords, to encrypt data (e.g. BitLocker) and to generate certificate keys used in secure communication and sender/receiver attestation. The vulnerability also affects digital IDs, such as those used by the Estonian government, based on smart card technology.
The RSA-generated prime numbers are not truly random, making it possible to derive the private key from the public key. Depending on the size of the RSA key, the researchers estimate cracking it takes:
These estimates are based on an Intel E5-2650 v3 @ 3GHz (Q2/2014). 97 CPU days is nothing, as the crack can run in parallel and compute resources have become cheap with all the cloud offerings.
Mitigating the ROCA security vulnerability requires firmware upgrades of the hardware security chip. Microsoft (Windows 7+), Google and others have released patches.
Read more about the ROCA CVE-2017-15361 vulnerability.
It is time to update all my devices, but it is going to take time.
This is the list of devices I need to update: 5 Laptops, 1 Chromebook, 1 Xbox One, 1 Windows tablet, 2 iPads, 2 iPhones, 1 Android, 1 Windows Phone, 1 Ubiquity Access Point, 1 Sagemcom router (Owned by the ISP), 1 Amazon Echo, 1 Samsung TV, 1 Panasonic TV and countless IoT devices
Who am I kidding? Some of my devices are old (like my TVs), so the manufacturers will probably never release a patch.
As if that is not worrisome enough, a privilege-escalation vulnerability in the Linux kernel was also discovered. That means even more stuff to update.
My machine needs to be secure, so Secure Boot and an encrypted drive via BitLocker are a must. They limit the risk of someone messing with my machine and stealing my data.
Here is how to create a bootable USB that works with Secure Boot enabled:
Do remember to enable Secure Boot in the BIOS settings and set it in setup mode/clear the keys.
GPT (GUID Partition Table) is a replacement for MBR partitioning that allows larger disk sizes but requires a 64-bit OS. Read more.
I wanted to create a Hello World image that works on Windows Containers. I chose to create the Hello World program using .NET Core – it runs cross-platform (Linux, Mac & Windows), but I'm using Windows.
On the Windows Server 2016 container host I created in the previous blog post, I needed to install .NET Core. You can either go to http://dot.net/core to download or execute the following PowerShell to download the installer.
Invoke-WebRequest https://go.microsoft.com/fwlink/?LinkID=809122 -OutFile c:\dotnetinstall.exe
I created a folder called HelloWorld and executed the following commands in a command prompt:
PS C:\Users\aly\Desktop\HelloWorld> dotnet new
Created new C# project in C:\Users\aly\Desktop\HelloWorld.
PS C:\Users\aly\Desktop\HelloWorld> dotnet restore
log : Restoring packages for C:\Users\aly\Desktop\HelloWorld\project.json...
log : Writing lock file to disk. Path: C:\Users\aly\Desktop\HelloWorld\project.lock.json
log : C:\Users\aly\Desktop\HelloWorld\project.json
log : Restore completed in 1049ms.
PS C:\Users\aly\Desktop\HelloWorld> dotnet run
Project HelloWorld (.NETCoreApp,Version=v1.0) will be compiled because expected outputs are missing
Compiling HelloWorld for .NETCoreApp,Version=v1.0
Compilation succeeded.
    0 Warning(s)
    0 Error(s)
Time elapsed 00:00:01.8161040
Hello World!
PS C:\Users\aly\Desktop\HelloWorld> dotnet publish
Publishing HelloWorld for .NETCoreApp,Version=v1.0
Project HelloWorld (.NETCoreApp,Version=v1.0) was previously compiled. Skipping compilation.
publish: Published to C:\Users\aly\Desktop\HelloWorld\bin\Debug\netcoreapp1.0\publish
Published 1/1 projects successfully
PS C:\Users\aly\Desktop\HelloWorld>
You can use any executable, I just needed something simple to work with. .NET Core is my simple choice.
I am going to base my image on the microsoft/windowsservercore image. But instead of installing .NET Core myself, I can use an image that already has .NET Core installed, called microsoft/dotnet:windowsservercore. See all the official Microsoft Docker images at the Microsoft Docker Hub repository. So pull down the microsoft/dotnet:windowsservercore image:
docker pull microsoft/dotnet:windowsservercore
If you want to play with it, run the following command to start a container; a command prompt will appear. Type exit when you want to go back to the container host.
docker run -it microsoft/dotnet:windowsservercore
Now let’s create the Hello World image. Create a file called Dockerfile (without extension) outside the /HelloWorld folder and add this:
FROM microsoft/dotnet:windowsservercore
MAINTAINER Anders Lybecker (@Lybecker)
ADD /helloworld/bin/Debug/netcoreapp1.0/publish/ c:\\code
ENTRYPOINT dotnet c:\\code\\helloworld.dll
Line 1 dictates that I’m basing the Hello World image on the official image microsoft/dotnet:windowsservercore
Line 2 is just maintainer information and is optional
Line 3 adds the content of the publish folder of the container host to the image in c:\code
Line 4 sets the default command to execute the Hello World program
For more details see the Dockerfile reference.
Build the container by executing the docker build command. The image is named myhelloworld and tagged with v1 for version 1. The trailing . specifies where the Dockerfile is located.
PS C:\Users\aly\Desktop> docker build -t myhelloworld:v1 .
Sending build context to Docker daemon 341.5 kB
Step 1 : FROM microsoft/dotnet:windowsservercore
 ---> 1e21a0790e96
Step 2 : MAINTAINER Anders Lybecker (@Lybecker)
 ---> Running in 6812a0ecf67d
 ---> 5dce994e32a4
Removing intermediate container 6812a0ecf67d
Step 3 : ADD /helloworld/bin/Debug/netcoreapp1.0/publish/ c:\\code
 ---> 22fade758f23
Removing intermediate container 665f9e677611
Step 4 : ENTRYPOINT dotnet c:\\code\\helloworld.dll
 ---> Running in dad461888c4c
 ---> 204ed6ed0b59
Removing intermediate container dad461888c4c
Successfully built 204ed6ed0b59
That’s it. You have now created your own image. Let’s try to run it
PS C:\Users\aly\Desktop> docker run myhelloworld:v1
Hello World!
You can see the image in the local image repository with the docker images command.
PS C:\Users\aly\Desktop> docker images
REPOSITORY                    TAG                 IMAGE ID            CREATED             SIZE
myhelloworld                  v1                  204ed6ed0b59        10 minutes ago      8.111 GB
microsoft/dotnet              windowsservercore   1e21a0790e96        2 weeks ago         8.111 GB
microsoft/windowsservercore   10.0.14300.1030     02cb7f65d61b        10 weeks ago        7.764 GB
microsoft/windowsservercore   latest              02cb7f65d61b        10 weeks ago        7.764 GB
The myhelloworld:v1 image is based on microsoft/dotnet:windowsservercore, which is based on microsoft/windowsservercore:10.0.14300.1030. You can see all the layers via the docker history command:
PS C:\Users\aly\Desktop> docker history myhelloworld:v1
IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
204ed6ed0b59        12 minutes ago      cmd /S /C #(nop) ENTRYPOINT ["cmd" "/S" "/C"    46.58 kB
22fade758f23        12 minutes ago      cmd /S /C #(nop) ADD dir:4bef0fa9bcfacdaa9bb8   40.96 kB
5dce994e32a4        12 minutes ago      cmd /S /C #(nop) MAINTAINER Anders Lybecker     181.2 MB
1e21a0790e96        2 weeks ago         cmd /S /C mkdir warmup && cd warmup &           40.96 kB
                    2 weeks ago         cmd /S /C #(nop) ENV NUGET_XMLDOC_MODE=skip     4.756 MB
                    2 weeks ago         cmd /S /C setx /M PATH "%PATH%;%ProgramFiles%   160.7 MB
                    2 weeks ago         cmd /S /C powershell -NoProfile -Command        40.96 kB
                    2 weeks ago         cmd /S /C #(nop) ENV DOTNET_SDK_DOWNLOAD_URL    40.96 kB
                    2 weeks ago         cmd /S /C #(nop) ENV DOTNET_SDK_VERSION=1.0.    7.764 GB
I have made my Hello World image available on Docker Hub, so you can pull and run the image on your Windows Container host like so:
PS C:\Users\aly\Desktop> docker pull anderslybecker/dotnet-hello-world:windowsservercore
windowsservercore: Pulling from anderslybecker/dotnet-hello-world
Digest: sha256:1df17a8a38d969b71b38333be25f76757e69f1537c7a86a6ee966bca87163464
Status: Image is up to date for anderslybecker/dotnet-hello-world:windowsservercore
PS C:\Users\aly\Desktop> docker run anderslybecker/dotnet-hello-world:windowsservercore
Hello World!
One thing to be aware of when working with containers is that the underlying host must run the same type of operating system as the containers on it: Linux containers on Linux hosts and Windows containers on Windows hosts.
First, a container host is needed – you can use Windows 10 Anniversary Update or Windows Server 2016.
The easiest way of getting started is to spin up a Windows Server 2016 on Azure (get a free trial) with the Container feature enabled.
Alternatively, you can follow Windows Containers on Windows Server guide to install the Container feature on an existing Windows Server 2016.
Once you have created the host you can connect to the host via RDP (in the Azure portal use the ”Connect” button in the top menu).
Start up a command prompt or PowerShell. You can use PowerShell for Docker or the Docker CLI to execute Docker commands. The commands are the same across platforms, no matter if you are using Linux or Windows-based containers. I’ll be using the Docker CLI commands. The common commands are:
Let’s get started.
Run the following command to see which images are available in the local repository.
PS C:\Users\aly\Desktop> docker images
REPOSITORY                    TAG                 IMAGE ID            CREATED             SIZE
microsoft/windowsservercore   10.0.14300.1030     02cb7f65d61b        10 weeks ago        7.764 GB
microsoft/windowsservercore   latest              02cb7f65d61b        10 weeks ago        7.764 GB
PS C:\Users\aly\Desktop>
On my Windows Server 2016 Tech Preview 5 there are two images, both with the name microsoft/windowsservercore but with different tags. Both of them have the same image ID, so they are the same image with two different tags.
To start a container of the image tagged with ‘latest’ run the following:
PS C:\Users\aly\Desktop> docker run microsoft/windowsservercore:latest
Microsoft Windows [Version 10.0.14300]
(c) 2016 Microsoft Corporation. All rights reserved.

C:\>
PS C:\Users\aly\Desktop>
The tag is optional, but the default value is ‘latest’.
The container was started and a command prompt appeared, but then it shut down again and it returned to my PowerShell prompt.
If you want to interact with the container, add the -it (interactive) option. You also have the option of specifying which process should be run in the container (cmd is default for this image):
docker run -it microsoft/windowsservercore:latest cmd
Now a command prompt appears and you are in the context of the container. If you modify a file, e.g. add or delete a file, the changes will only apply to the container and not the host.
Create a simple file like this:
echo "Hello Windows Containers" > hello.txt
You can exit the container by typing exit, and the container will terminate. Alternatively, you can press CTRL + P + Q to exit and leave the container running.
If you left the container running, you can see the container by listing the Docker processes:
PS C:\Users\aly\Desktop> docker ps
CONTAINER ID        IMAGE                                COMMAND             CREATED             STATUS              PORTS               NAMES
23ca16bb6fdb        microsoft/windowsservercore:latest   "cmd"               4 minutes ago       Up 4 minutes                            pedantic_lamport
If the container was terminated, the -a option needs to be appended.
You can reattach to the container by specifying the container id or name – in my case 23ca16bb6fdb or pedantic_lamport – like so:
docker attach 23ca16bb6fdb
You only have the Windows Server Core image in the local repository, but you can download others by pulling from Docker Hub.
docker pull microsoft/nanoserver
Remember that only Windows-based images will run on a Windows host, so if you try the Hello-World Linux-based image, it will fail with a not so elaborate error message.
PS C:\Users\aly\Desktop> docker pull hello-world
Using default tag: latest
latest: Pulling from library/hello-world
c04b14da8d14: Extracting [==================================================>]    974 B/974 B
failed to register layer: re-exec error: exit status 1: output: ProcessBaseLayer C:\ProgramData\docker\winc266a137b0b1fffedf91d8cd6fcb6560f12afe5277e44bca8cb34ec530286: The system cannot find the path specified.
For now it is not easy to differentiate between Linux and Windows-based images on Docker Hub. I wish there was a filter, making it easier to find relevant images.
Microsoft has a public repository of all the official released Microsoft container images.
The community has built a number of packages containing great analyzers, fixes and refactorings. These can be installed either as a Visual Studio 2015 extension or at project level as NuGet packages.
Refactoring Essentials contains approx. 200 code analyzers, fixes and refactorings
Simple defensive code analyzers like parameter checking.
Simplifying code by converting a conditional ternary to null coalescing.
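As a hypothetical before/after of that particular refactoring:

```csharp
using System;

string input = null; // pretend this came from somewhere else

// Before: a conditional ternary checking for null.
string name = input != null ? input : "(unknown)";

// After: the null-coalescing operator says the same thing more directly.
string name2 = input ?? "(unknown)";

Console.WriteLine(name == name2); // True
```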
CSharp Essentials focuses on the new features in C# 6 such as the nameof operator, string interpolation, auto-properties and expression-bodied methods.
Code Cracker is a smaller package for C# and VB with analyzers e.g. for empty catch blocks and if a disposable object is disposed.
SonarLint for C# has great analyzers too, as Christiaan Rakowski points out in the comments. One of them warns about logical paths that can never be reached or that can be simplified.
With Windows 10 and the new Universal Windows Platform, you as the developer need to make sure that your Windows app does not use an API that is unsupported on the platform you are targeting. This is exactly what the Platform Specific Analyzer package does, for both C# and VB.
If you know any other great packages – let me know.
Azure Automation is the right tool for the job. Azure Automation automates Azure management tasks and orchestrates actions across external systems from within Azure. You need an Azure Automation account, which is a container for all your runbooks, runbook executions (jobs), and the assets that your runbooks depend on.
To execute runbooks, a set of user credentials needs to be stored as an asset. Create a new user as described in Azure Automation: Authenticating to Azure using Azure Active Directory.
Below is a guide on how to create the Azure Automation account and the runbook.
The new Azure Automation account lybAutomation and the runbook Stop Windows Azure Virtual Machines on a Schedule are created from the gallery. The content in the gallery comes from the Azure Script Center. The Azure Script Center has many PowerShell scripts covering many scenarios, but not all can be used with Azure Automation, as some scripts use features not available in Azure Automation. You do get a warning if you select one that is not supported, but in my mind, it should not be available in the gallery at all.
That burned me the first time I tried Azure Automation: I used the Stop Windows Azure Virtual Machines on a Schedule runbook from the gallery, but it uses an on-premises scheduler.
You need to store the credentials of the user created earlier in the runbook. See below.
Then you need to configure the runbook script with the credentials and the Azure subscription where the virtual machines reside. See below.
You find your subscription name in the top bar “Subscriptions” of the Azure portal.
Now you can test your runbook and all you need is to set up the schedule, so it runs every evening. See guide below.
Be aware that the time is in UTC, so you have to correct the time according to your time zone. I expect the scheduler to get an overhaul, as it is too simple right now.
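For example, if your local time zone is UTC+2, a 6 p.m. local shutdown must be scheduled at 16:00 UTC. A quick illustrative sketch (the UTC+2 offset is an assumption for the example):

```csharp
using System;

// 18:00 in a UTC+2 time zone...
var localSixPm = new DateTimeOffset(2015, 6, 1, 18, 0, 0, TimeSpan.FromHours(2));

// ...corresponds to 16:00 UTC, which is the time the scheduler needs.
Console.WriteLine(localSixPm.UtcDateTime.Hour); // 16
```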
I manage the Azure VMs and almost everything else with Server Explorer in Visual Studio. It is a quick way to start VMs in the morning.
If I have a list of VMs that I need to manage, then I use the Azure PowerShell cmdlets – see my How-to start and stop Azure VMs via PowerShell.
Finally, I use Azure Automation to ensure that I never have an Azure VM running all night just because I forgot to shut it down – see How-to start and stop Azure VMs at a schedule. It automatically shuts down any VM running in my MSDN subscription at 6 p.m. If I work later, I can just start the required VMs again – it only takes a couple of minutes.
Add-AzureAccount
In the sign-in window, provide your Microsoft credentials for the Azure account.
If you, like me, have multiple Azure subscriptions, change the default subscription with:
Select-AzureSubscription [-SubscriptionName]
To start an Azure VM the syntax is:
Start-AzureVM [–Name] [-ServiceName]
To start a VM named vs2015 in the cloud service lybCloudService requires as little as:
Start-AzureVM vs2015 lybCloudService
Stopping the VM is just as easy:
Stop-AzureVM [-Name] [-ServiceName]
If it is the last running VM in the cloud service, you will be asked whether you want to deallocate the cloud service or not, as deallocation releases the public IP address. That is not a problem if you access your VM via its DNS name – which most people do.
You can suppress the question by appending -Force like this:
Stop-AzureVM vs2015 lybCloudService -Force
There are many useful Azure PowerShell cmdlets to use. To list all Azure PowerShell cmdlets:
Help Azure
Get details on Azure PowerShell cmdlet:
Man <cmdlet name>
List all VMs:
Get-AzureVM
Get details of a specific VM:
Get-AzureVM [–Name] [-ServiceName]
The PowerShell prompt works just like a normal command prompt, so you can use tab completion and F7 to show all previously executed commands.
A small but still significant feature in C# 6 is index initializers. Index initializers can be used to initialize object members, but also dictionaries. Initializing a dictionary has always been cumbersome, but not anymore.
var numbers = new Dictionary<int, string>
{
    [7] = "seven",
    [9] = "nine",
    [13] = "thirteen"
};
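For comparison, here is the same dictionary with the pre-C# 6 collection initializer. One subtle difference worth knowing: the old syntax calls Add() and throws on duplicate keys, while the index initializer assigns and silently overwrites.

```csharp
using System;
using System.Collections.Generic;

// The old collection-initializer syntax (calls Add() under the hood).
var numbers = new Dictionary<int, string>
{
    { 7, "seven" },
    { 9, "nine" },
    { 13, "thirteen" }
};

Console.WriteLine(numbers[7]); // seven
```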
There are other great new features in C# that I have not touched – have a look at the blog post New features in C# 6 by Mads Torgersen, Principal Program Manager, VS Managed Languages.
At first, auto-property initializers do not sound very interesting at all, but wait…
Simple things such as setting a default value for a property:
public class Order
{
    public int OrderNo { get; set; } = 1;
}
Or using the getter-only auto-property, which is implicitly declared readonly and can therefore be set in the constructor.
public class Order
{
    public Order(int orderNo)
    {
        OrderNo = orderNo;
    }

    public int OrderNo { get; }
}
From my point of view, the value of auto-property initializers shines when used with list properties, where the list has to be initialized.
public class Order
{
    public IEnumerable<OrderLine> Lines { get; } = new List<OrderLine>();
}
I often forget to initialize a list property in the constructor and therefore get a NullReferenceException when accessing the list property. Now I might even be able to omit the constructor altogether.
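A quick check of that behavior, in a self-contained sketch (the Order class is re-declared here, with IList instead of IEnumerable so Count is directly available):

```csharp
using System;
using System.Collections.Generic;

var order = new Order(); // no constructor needed

// The list is already initialized, so this prints 0
// instead of throwing a NullReferenceException.
Console.WriteLine(order.Lines.Count);

public class Order
{
    public IList<OrderLine> Lines { get; } = new List<OrderLine>();
}

public class OrderLine { }
```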
Expression-bodied methods make it possible for methods and properties to be written as expressions instead of statement blocks, just like lambda functions.
Let’s revisit the Person.ToString method from the Awesome string formatting blog post.
public class Person
{
    public string Name { get; set; }
    public Address HomeAddress { get; set; }

    public override string ToString()
    {
        return string.Format("{0} lives in {1}.", Name, HomeAddress?.City ?? "City unknown");
    }
}
The ToString method can be written like a lambda function.
public override string ToString() => string.Format("{0} lives in {1}.", Name, HomeAddress?.City ?? "City unknown");
And simplified with String interpolation.
public override string ToString() => $"{Name} lives in {HomeAddress?.City ?? "City unknown"}.";
Use expression-bodied methods anywhere…
public Point Move(int dx, int dy) => new Point(x + dx, y + dy);

public static Complex operator +(Complex a, Complex b) => a.Add(b);

public static implicit operator string(Person p) => $"{p.First} {p.Last}";
Using the versatile string.Format required a lot of typing and keeping the numbered placeholders in sync with the method arguments.
var numerator = 1;
var denominator = 2;

Console.WriteLine("Fraction {0}/{1}", numerator, denominator);

// Output:
// Fraction 1/2
In C# 6 it is a lot easier with string interpolation:
var numerator = 1;
var denominator = 2;

Console.WriteLine($"Fraction {numerator}/{denominator}");

// Output:
// Fraction 1/2
You reference the variable, property or field directly within the string. It is even possible to access properties or use expressions.
public class Person
{
    public string Name { get; set; }
    public Address HomeAddress { get; set; }

    public override string ToString()
    {
        return string.Format($"{Name} lives in {HomeAddress.City}.");
    }
}
The string.Format call is not even needed; just use the shorthand notation $:
return $"{Name} lives in {HomeAddress.City}.";
This is easily combined with an expression and the null-conditional operator (?.).
return $"{Name} lives in {HomeAddress?.City ?? "City unknown"}.";
The nameof operator takes a class, method, property, field or variable and returns its name as a string literal.
var p = new Person();

Console.WriteLine(nameof(Person));
Console.WriteLine(nameof(p));
Console.WriteLine(nameof(Person.Name));
Console.WriteLine(nameof(Person.HomeAddress));

// Output:
// Person
// p
// Name
// HomeAddress
This is handy when doing input validation, keeping the method parameter and the parameter name passed to the ArgumentNullException in sync.
public Point AddPoint(Point point)
{
    if (point == null)
        throw new ArgumentNullException(nameof(point));

    // ...
}
The nameof operator is also useful when implementing the INotifyPropertyChanged interface.
public string Name
{
    get { return _name; }
    set
    {
        _name = value;
        this.OnPropertyChanged(nameof(Name));
    }
}
The Chained null checks blog post shows how to simplify triggering the event in OnPropertyChanged with the null-conditional operator.
The null-conditional operator is one of the features in C# 6 that will save the world from a lot of boilerplate code and a bunch of NullReferenceExceptions. It works as chained null checks!
Console.WriteLine(person?.HomeAddress?.City ?? "City unknown");
Note the null-conditional operator (?.) after person and HomeAddress: it returns null and terminates the object reference chain if one of the references is null.
It is the same logic as the code below, which you can use today.
if (person != null && person.HomeAddress != null)
{
    Console.WriteLine(person.HomeAddress.City);
}
else
{
    Console.WriteLine("City unknown");
}
The null-conditional operator will also make it easier to trigger events. Today you have to copy the event reference and check it for null before raising the event, like so:
protected void OnPropertyChanged(string name)
{
    PropertyChangedEventHandler handler = PropertyChanged;
    if (handler != null)
    {
        handler(this, new PropertyChangedEventArgs(name));
    }
}
But the null-conditional operator provides a thread-safe way of checking for null before triggering the event.
PropertyChanged?.Invoke(this, args);
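Putting the pieces together, here is a minimal self-contained sketch (not from the original post) of an INotifyPropertyChanged implementation combining nameof with the null-conditional invoke:

```csharp
using System;
using System.ComponentModel;

var person = new Person();
string raised = null;
person.PropertyChanged += (s, e) => raised = e.PropertyName;

person.Name = "Anders";
Console.WriteLine(raised); // Name

public class Person : INotifyPropertyChanged
{
    private string _name;

    public event PropertyChangedEventHandler PropertyChanged;

    public string Name
    {
        get { return _name; }
        set
        {
            _name = value;
            OnPropertyChanged(nameof(Name));
        }
    }

    // Thread-safe: ?. reads PropertyChanged once and only invokes if non-null.
    protected void OnPropertyChanged(string name) =>
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
}
```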
Let us extend the implementation and make full use of the AJAX capabilities.
Wrap the entire table in a div tag and give it an id of content – this will enable us to replace the table without refreshing the entire webpage.
<div id="content">
    <table>
        ... removed for brevity
    </table>
    <div id="contentPager">
        @Html.PagedListPager(Model, page => Url.Action("Index", new { page }))
    </div>
</div>
Also wrap the @Html.PagedListPager in a div tag and set the id to contentPager – this will let us alter the behavior of the click event.
The jQuery code below attaches an anonymous function to every a tag within the contentPager element; the function replaces the HTML within the content element with whatever is returned from the AJAX call.
$(document).on("click", "#contentPager a", function () {
    $.ajax({
        url: $(this).attr("href"),
        type: 'GET',
        cache: false,
        success: function (result) {
            $('#content').html(result);
        }
    });
    return false;
});
Move everything within the content element to a new view – let us call the new view List.
@model PagedList.IPagedList<ContosoUniversity.Models.Student>
@using PagedList.Mvc;

<div id="content">
    <table class="table">
        ... removed for brevity
    </table>
    <div id="contentPager">
        @Html.PagedListPager(Model, page => Url.Action("List", new { page }))
    </div>
</div>
Notice in the highlighted code above that the action URL is changed to List, which is the name of the action we need to add to the StudentController.
public ActionResult List(int? page)
{
    var students = from s in db.Students
                   orderby s.LastName
                   select s;

    int pageSize = 3;
    int pageNumber = (page ?? 1);

    return View(students.ToPagedList(pageNumber, pageSize));
}
The functionality of the new List action is the same as in the existing Index action. Just move all the code from the Index action, so it only returns the default view, as below.
public ViewResult Index()
{
    return View();
}
To wrap it up, the Index view needs to call the List Action to render the table in the Index view.
So the Index view ends up looking like this.
<link href="~/Content/PagedList.css" rel="stylesheet" type="text/css" />

@{
    ViewBag.Title = "Students";
}

<h2>Students</h2>

@Html.Action("List")

@section scripts {
    <script language="javascript" type="text/javascript">
        $(document).ready(function () {
            $(document).on("click", "#contentPager a[href]", function () {
                $.ajax({
                    url: $(this).attr("href"),
                    type: 'GET',
                    cache: false,
                    success: function (result) {
                        $('#content').html(result);
                    }
                });
                return false;
            });
        });
    </script>
}
That is it.
The solution is inspired by this StackOverflow question.
Download the complete solution, build and open the Student page. If running the solution in Visual Studio 2015+, then change the data source connection string in web.config to (localdb)\MSSQLLocalDB as the default SQL Server LocalDB instance name has changed.
Update May 15th 2016: The PagedList.Mvc NuGet package is no longer maintained, but a fork of the project is available and maintained called X.PagedList.MVC. I have updated the post and the source to use this new package.
Juanster and others pointed out a bug in the sample, but it was actually in the PagedList.MVC. I created a pull request for X.PagedList.MVC, which is now part of the NuGet package.
It is simple to use the property mapping, but you can also use AutoMap combined with one or more overriding mappings. That is often useful when using the MongoDB IdGenerators. But how do you specify which one to use without using attributes?
Below is a simple class with no dependencies – it only depends on the .NET Base Class Library:
public class Person
{
    public string Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}
Notice that not even an ObjectId is present.
To instruct MongoDB to generate a unique identifier for the Id property:
public class Person
{
    [BsonId]
    public string Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}
But now there is a dependency on the BsonId attribute. We can remove it via a class map:
BsonClassMap.RegisterClassMap<Person>(cm =>
{
    cm.AutoMap();
    cm.SetIdMember(cm.GetMemberMap(x => x.Id)
        .SetIdGenerator(StringObjectIdGenerator.Instance));
});
It is possible to use other data types as unique identifiers; just choose the corresponding IdGenerator.
This is simple to achieve with MongoDB via the atomic operation $inc. If you use the official MongoDB C# driver, the operation is exposed via the Update class.
To implement auto-increment or sequence functionality with MongoDB, a document is required to keep the state of the sequence. The Id is the name of the sequence, e.g. orderid, and the value is the current counter value.
class Counter
{
    public string Id { get; set; }
    public int Value { get; set; }
}
Getting the next order id is then done by executing the statement below:
var client = new MongoClient(connectionString);
MongoServer server = client.GetServer();
MongoDatabase db = server.GetDatabase("myDatabase");
var counterCol = db.GetCollection<Counter>("counters");

var result = counterCol.FindAndModify(new FindAndModifyArgs()
{
    Query = Query<Counter>.EQ(d => d.Id, "orderId"),
    Update = Update<Counter>.Inc(d => d.Value, 1),
    VersionReturned = FindAndModifyDocumentVersion.Modified,
    Upsert = true // Create the document if it does not exist
});
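The essence of the pattern can be sketched without MongoDB at all, using an in-memory dictionary; $inc with upsert performs the same atomic read-increment-return on the server (the sequence name below is illustrative):

```csharp
using System;
using System.Collections.Concurrent;

// Stand-in for the counters collection.
var counters = new ConcurrentDictionary<string, int>();

// Atomically increments and returns the new value, like $inc with Upsert = true.
int NextSequence(string name) =>
    counters.AddOrUpdate(name, 1, (_, current) => current + 1);

Console.WriteLine(NextSequence("orderId")); // 1
Console.WriteLine(NextSequence("orderId")); // 2
```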
Softie is internal slang for Microsoft employee.
For a couple of years I have had my own company, Avior, together with my partner. We had fun times and difficult times, but we did what we loved – developing software. Late September I started talking with Microsoft Denmark about a position as technical evangelist. At first I was reluctant, as I was afraid of losing my technical competence and leaving my own company, but I was intrigued. I finally agreed to leave Avior and join Microsoft after a couple of conversations with current and former Microsoft employees – they all spoke fondly about Microsoft, if I could cope with the politics and ceremony.
An evangelist advocates the evangelium, which means ‘good news’. All Latin, nothing religious – in my case strictly technical.
It is about connecting people who have problems with a product, technology and knowledge needed in order for them to succeed. In my mind, it is all about authentic content, communication, and community. I wish to spread knowledge and help other developers while keeping my integrity.
Now 3 months in, I find myself at home at Microsoft, but I still feel like a n00b. There are so many people and internal processes that I need to familiarize myself with that I sometimes feel dizzy and do not feel that I am contributing enough.
I am catching up on Windows 8, Windows Phone 8 and Azure – the new stuff at Microsoft. It is a lot of ground to cover, so I no longer fear for my technical competencies, as I spend much of my time studying and helping customers with technical issues.
I wish to engage the community more in the New Year, so I am busy planning talks and the Danish Developer Conference.
One request for you – let me know how I am doing, please.
Merry Christmas and happy New Year.
]]>Here is a great resource for enabling diagnostics in Azure.
]]>Polly is an easy to use retry and circuit breaker pattern implementation for .Net – let me show you.
Start by specifying the policy – what should happen when an exception is thrown:
var policy = Policy .Handle&lt;SqlException&gt;(e => e.Number == 1205) // Handling deadlock victim .Or&lt;OtherException&gt;() .Retry(3, (exception, retryCount, context) => { // Log... });
The above policy specifies that a SqlException with number 1205 or an OtherException should be retried three times – if it still fails, log and bubble the original exception up the call stack.
var result = policy.Execute(() => FetchData(p1, p2));
It is also possible to specify the time between retries – e.g. exponential back off:
var policy = Policy .Handle&lt;MyException&gt;() .WaitAndRetry(5, retryAttempt => TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)));
Or the circuit breaker, safeguarding against the same error occurring again and again when an external system is temporarily unavailable:
var policy = Policy .Handle<TimeoutException>() .CircuitBreaker(2, TimeSpan.FromMinutes(1));
Go get it – I’m already using it
]]>public class CalculatorController : ApiController { public int Add(HttpRequestMessage requestMessage, int x, int y) { var accessToken = requestMessage.Headers.Authorization.Parameter; // use the HTTP header return x + y; } }
The HttpRequestMessage is automatically bound to the controller action, so you can still execute the action via http://localhost/calculator/add?x=3&y=2
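Calling it from a client could look like this – a sketch, where the host URL and the token value are placeholders:

```csharp
using (var client = new HttpClient())
{
    // Send the access token in the Authorization header;
    // the controller action reads it back out of the request.
    client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", "my-access-token");

    var response = await client.GetAsync(
        "http://localhost/calculator/add?x=3&y=2");
    var sum = await response.Content.ReadAsStringAsync(); // the serialized sum
}
```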
Simple and easy.
]]>I followed a couple of sessions on continuous delivery by Sam Newman, Michael T. Nygard (author of Release It) and Jez Humble (author of Continuous Delivery).
Continuous Integration is a prerequisite of Continuous Delivery, but many still don’t apply Continuous Integration to their solutions, with daily incremental check-ins, automated builds and unit tests.
To simplify Continuous Delivery, everything must be automated. To ease the task of automation, things must be simplified. To simplify, start by decomposing the system into manageable pieces, so each can be deployed separately. How?
Decomposing the system into disconnected services makes it easier to deploy a subset of the system. This limits the impact of a deployment, and it even makes it possible to mitigate risk further by making small incremental changes, deploying one subsystem at a time.
These services have to be structured as application silos and share nothing, not even the database schema.
By automating and decomposing your system into disconnected application silo services you too can do Continuous Delivery.
After the conference the GOTO Aarhus guys had joined up with the local community and user groups to host open sessions. I attended the ANUG (Aarhus .NET User Group) session with Anders Hejlsberg. He presented the brand new TypeScript – a superset of JavaScript that compiles into plain JavaScript and runs in any browser (a similar concept to CoffeeScript). It has great tooling support in Visual Studio with IntelliSense and static verification.
I’m looking forward to the last day of the conference tomorrow.
]]>Next up was a great presentation of graph databases by Jim Webber – a fast-speaking, provocative British architect from Neo4j. He (re)sparked my interest in ‘other’ databases and stressed that each type of database – relational, object, key-value store, document, graph etc. – fits its own problem domain. So you shouldn’t just pick RavenDB because it is the new hot thing in the .Net sphere (or because Ayende aka Oren Eini says so). I will definitely take a look at Neo4j with the .Net client library Neo4jClient. Another great point from Jim Webber: ACID does scale (though many claim otherwise), but he stressed it is distributed ACID with 2PC that doesn’t scale.
From then on I attended a couple of unfortunate sessions (not worth mentioning). Now it is time for the conference party where the beer is sponsored by Atlassian.
]]>I’ve been looking at the conference schedule trying to create my own… the line-up of internationally famous speakers is impressive, but I’ll go for the odd sessions to expand my horizon. During breaks I’ll discuss and share ideas with my fellow attendees – I might even skip sessions for interesting discussions in the hallways.
Here is my tentative schedule:
The conference covers diverse software development topics like big data, augmented data, agile perspectives, JavaScript, UX, continuous delivery, mobile, cloud, languages, NoSQL, scale… so this is not a vendor-specific conference where only the newest technology is presented.
I prefer conferences where I get inspired… a conference where all the participants – speakers and attendees alike – plant seeds in my head for new ideas and alternative approaches to solving problems.
That’s why I’m going to the GOTO Aarhus conference.
]]>A ping request to Google.com shows that a roundtrip takes around 800 ms, with fluctuations up to 1200 ms:
Pinging google.com [173.194.70.113] with 32 bytes of data:
Reply from 173.194.70.113: bytes=32 time=681ms TTL=43
Reply from 173.194.70.113: bytes=32 time=869ms TTL=43
Reply from 173.194.70.113: bytes=32 time=705ms TTL=43
Reply from 173.194.70.113: bytes=32 time=750ms TTL=43
An Internet connection speed test revealed around 400 Kbit/s download and 15 Kbit/s upload.
A trace route didn’t disclose much information; it is therefore not included in this blog post.
The Internet connection is very unreliable, making real work impossible, though IM and lightweight sites are browsable. Internet on a flight is a welcome initiative that makes flying more pleasant.
I just hope the competitors will do the same and the quality of the connection will improve.
]]>I enjoyed the Community Day immensely and I am looking forward to next year.
Download Solr separately from Apache Foundation.
]]>You might have success outsourcing if you find talent, but you will fail without it!
Businesses neglect the importance of finding skilled and talented software developers when outsourcing, which will almost certainly lead to problems or failure in the long run.
It doesn’t matter if it is a project or IT services being outsourced – the people in the other end have to have skills and preferably talent.
Obtaining a degree or completing a certification does not prove that a person has skills. Just as managers will never employ a developer based on a resume alone, neither should outsourced developers be selected on paper alone. The business should set up quality parameters in the outsourcing contract or interview the developers themselves – but that is rarely feasible.
There are other essential parameters that should not be neglected, like creativity, motivation and talent nurturing. All the regular personnel management concerns also apply to outsourcing.
Offshoring to low-cost countries just complicates things even further… as you have to consider the language barrier, culture differences and time zones also.
]]>Outsourcing software development can be a good thing for the business, especially if the area is not within the business’s main area of expertise, or if it requires too few developers to gather enough brain trust to sustain the level of expertise.
If software development is not within the business’s area of expertise, then the area will often be neglected, leading to low morale and lack of commitment. It is not seen as an important part of the business, but as a necessary evil. The developers will not have the best tools possible or access to new knowledge and inspiration at conferences. This is a downward spiral of developer skills and will eventually lead to failure.
If the business only has a small number of developers with similar skillsets, then the ability to share knowledge is impaired. Developers who have no one, or fewer than a handful of coworkers, to share knowledge with will almost never become very skilled. Knowledge workers require peers to stay knowledgeable.
If both scenarios above are combined, then the problems become very evident and will never lead to success.
In either case outsourcing makes sense and will in most cases provide business value.
Outsourcing to low-cost countries aka offshoring complicates things even further and should not be considered before thorough scrutiny of your business. Does the business employ the required competency, are the procedures in place and is the organization mature enough?
Due to the magnitude of the required preliminary analysis, offshoring only makes economic sense for larger-scale operations and is not viable for smaller businesses.
Update Feb 28. 2013: A great blog post Is Offshoring Less Expensive? Exposing Another Management Myth
]]>
Download the full one-page comic.
The .Net Framework 4.0 provides the new default behavior of background garbage collection.
]]>I was on a leisure trip to Rome, Italy to see the sights. A beautiful city with many sites like the Vatican, the Colosseum and the Spanish Steps. I was supposed to fly directly from Rome to Manila, Philippines to assist a customer. The customer was finalizing my travel plans while I was in Rome. Unfortunately I lost my mobile phone in Rome, which made it rather difficult to coordinate the travel plans, but after 3 or 4 different travel itineraries the flight was booked from Rome to Manila via Seoul, Korea.
I arrived in Manila via Seoul only to find out the hotel was not confirmed. To make things worse, it was fully booked, and so were all the other hotels in the Makati area of Metro Manila. After an hour’s searching I managed to find a hotel room for the night, but I had to find another hotel for the next day.
Apparently available rooms were in short supply in the Makati area, as I had to change hotels each of the next five days. I could not book a consecutive reservation at the same hotel. I slept in rooms ranging from extravagant 150 m2 suites to a 15 m2 crummy hotel room with ants in my bed. It was tiring, but the retreat to the lovely Philippine island of Bohol the following weekend made me see everything in a brighter light.
Friday I had to catch the flight to Bohol, so I took a taxi to the airport. Unfortunately the taxi was barely able to carry its own weight up the Skyway ramp, and halfway up it gave up and broke down. I was now stuck in the middle of Manila with no other taxi in sight, running late, and might not make the flight to the lovely island of Bohol. I tried to persuade a tricycle driver to take me to the airport, but they were not allowed to enter the airport area – then I tried to hire a Jeepney, but the driver was overly greedy and my attempt at bargaining failed. Luckily a taxi appeared out of nowhere and I was on my way to the airport.
I arrived 25 minutes after the check-in was closed and 5 minutes before departure. I was immediately redirected to the supervisor, who luckily let me check-in – I rushed through the security check and directly onto the waiting flight.
It was a great weekend retreat to Bohol, where I saw the Tarsier and the Chocolate Hills and snorkeled at the coral reef, where I saw clownfish and a turtle.
Back in Manila, after an additional week of work it was Friday and time to travel back home to Copenhagen, Denmark. Due to the confusion over the travel itineraries I was apparently supposed to travel home the day before, Thursday, not Friday. I was too late, as it was already Friday. So I had to find another flight from Manila to Copenhagen the same day… With some help from the very helpful Filipino Lee, I managed to get a flight Friday night with Thai Airways through Bangkok, Thailand.
It was a long trip home, as Thai Airways does not have inflight entertainment systems in any of their aircraft – I thought that was standard in this day and age.
I’m now home – still without a mobile phone. Fortunately I can already look back at this unfortunate trip and laugh. I enjoyed both Rome and the Philippines, even though so many things were working against me.
]]>CloudDrive is the obvious solution, as it is comparable to on-premise file systems with mountable virtual hard drives (VHDs). CloudDrive is however not the optimal choice, as it imposes notable limitations. The most significant limitation is that only one web role, worker role or VM role can mount the CloudDrive at a time with read/write access. It is possible to mount multiple read-only snapshots of a CloudDrive, but you have to manage the creation of new snapshots yourself, depending on the acceptable staleness of the Lucene indexes.
The alternative Lucene index storage solution is Blob Storage. Luckily a Lucene directory (Lucene index storage) implementation for Azure Blob Storage exists in the Azure Library for Lucene.Net. It is called AzureDirectory and allows any role to modify the index, but only one role at a time. Furthermore, each Lucene segment (see Lucene Index Segments below) is stored in a separate blob, therefore utilizing many blobs at the same time. This allows the implementation to cache each segment locally and retrieve blobs from Blob Storage only when new segments are created. Consequently the compound file format should not be used, and optimization of the Lucene index is discouraged.
Getting Lucene.Net up and running is simple, and using it with the Azure Library for Lucene.Net requires only the Lucene directory to be changed, as highlighted below in the Lucene index and search example. Most of it is Azure-specific configuration plumbing.
Lucene.Net.Util.Version version = Lucene.Net.Util.Version.LUCENE_29;

CloudStorageAccount.SetConfigurationSettingPublisher(
    (configName, configSetter) => configSetter(
        RoleEnvironment.GetConfigurationSettingValue(configName)));
var cloudAccount = CloudStorageAccount
    .FromConfigurationSetting("LuceneBlobStorage");

var cacheDirectory = new RAMDirectory();
var indexName = "MyLuceneIndex";
var azureDirectory = new AzureDirectory(cloudAccount, indexName, cacheDirectory);
var analyzer = new StandardAnalyzer(version);

// Add content to the index
var indexWriter = new IndexWriter(azureDirectory, analyzer,
    IndexWriter.MaxFieldLength.UNLIMITED);
indexWriter.SetUseCompoundFile(false);
foreach (var document in CreateDocuments())
{
    indexWriter.AddDocument(document);
}
indexWriter.Commit();
indexWriter.Close();

// Search for the content
var parser = new QueryParser(version, "text", analyzer);
Query q = parser.Parse("azure");
var searcher = new IndexSearcher(azureDirectory, true);
TopDocs hits = searcher.Search(q, null, 5, Sort.RELEVANCE);
foreach (ScoreDoc match in hits.scoreDocs)
{
    Document doc = searcher.Doc(match.doc);
    var id = doc.Get("id");
    var text = doc.Get("text");
}
searcher.Close();
Download the reference example which uses Azure SDK 1.3 and Lucene.Net 2.9 in a console application connecting either to Development Fabric or your Blob Storage account.
Segments are the essential building block in Lucene. A Lucene index consists of one or more segments, each a standalone index. Segments are immutable and created when an IndexWriter flushes. Deletes or updates are therefore not removed from the original segment; deleted documents are merely marked as deleted, and new documents are stored in a new segment.
Optimizing an index reduces the number of segments, by creating a new segment with all the content and deleting the old ones.
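With the Lucene.Net 2.9 API that looks roughly like this (a sketch reusing the azureDirectory and analyzer from the example above):

```csharp
var writer = new IndexWriter(azureDirectory, analyzer,
    IndexWriter.MaxFieldLength.UNLIMITED);

// Merges all segments into a single new segment and deletes the old ones.
// Discouraged with AzureDirectory, as it forces every locally cached
// segment blob to be rewritten in Blob Storage.
writer.Optimize();
writer.Close();
```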
I love presentations, like this one, where everyone participates in the discussion. It makes the experience so much more enjoyable, and everyone benefits from the collective knowledge sharing.
The presentation and code samples can be downloaded below:
I recommend the book “Lucene in Action” by Erik Hatcher. The samples in the book are all in Java, but they apply equally to Lucene.Net, as it is a 1:1 port of the Java implementation.
]]>The winner of yesterday’s Microsoft Christmas Calendar door #7 has been found. The winner is Gianluca Bosco, who submitted the following WCF client for the service:
class Program { static void Main(string[] args) { Console.WriteLine("Ready? Press [ENTER]..."); Console.ReadLine(); var factory = new ChannelFactory<Shared.IMyService>( new WSHttpBinding(), new EndpointAddress("http://localhost:8080/MyService")); factory.Endpoint.Binding.SendTimeout = new TimeSpan(0,2,0); var names = new[] { "Anders", "Bende", "Bo", "Egon", "Jakob", "Jesper", "Jonas", "Martin", "Ove", "Rasmus", "Thomas E", "Thomas" }; var x = from name in names.AsParallel() .WithDegreeOfParallelism(12) select Do(factory, name); x.ForAll(Console.WriteLine); Console.WriteLine("Done processing..."); Console.ReadLine(); } static string Do(ChannelFactory<Shared.IMyService> factory, string name) { var proxy = factory.CreateChannel(); var result = proxy.LooongRunningMethod(name); return result; } }
Gianluca did indeed find the worst performance sin of them all: do not instantiate a ChannelFactory for every call. This improvement alone can halve the time spent on a WCF call.
Gianluca also found the built-in trap in my implementation. The server implementation calls Thread.Sleep (between 1 and 100 seconds) to simulate long-running work. The default SendTimeout on wsHttpBinding (and all other bindings) is 1 minute, which means the client will get a TimeoutException because of the server’s long-running work.
Congratulations to Gianluca on his new helicopter.
There is a minor optimization that can improve performance further: calling Open and Close explicitly on a Channel. The reason is that an implicit Open performs thread synchronization, so only one thread opens the Channel while the remaining threads wait for the Channel to be ready.
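In code, the explicit Open/Close could look like this (a sketch based on the winning client above):

```csharp
var proxy = factory.CreateChannel();
var channel = (IClientChannel)proxy;

// Open explicitly to avoid the thread synchronization
// that an implicit (lazy) Open performs on the first call.
channel.Open();
var result = proxy.LooongRunningMethod(name);
channel.Close();
```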
If you have suggestions for further improvements, please write a comment.
]]>Sorry – this post was originally in Danish.
Today’s task is about Windows Communication Foundation. WCF is complex because of the sheer amount of functionality and can therefore seem convoluted. The complexity is also reflected in the size of the WCF assembly System.ServiceModel.dll, which is by far the largest assembly in the entire .Net Framework Class Library (FCL) … even larger than mscorlib.dll.
The task:
Implement a client for the service below, which uses WSHttpBinding with default settings.
[ServiceContract(Namespace = "www.lybecker.com/blog/wcfriddle")] public interface IMyService { [OperationContract(ProtectionLevel = ProtectionLevel.EncryptAndSign)] string LooongRunningMethod(string name); } public class MyService : IMyService { public string LooongRunningMethod(string name) { Console.WriteLine("{0} entered.", name); // Simulate work by random sleeping var rnd = new Random( name.Select(Convert.ToInt32).Sum() + Environment.TickCount); var sleepSeconds = rnd.Next(0, 100); System.Threading.Thread.Sleep(sleepSeconds * 1000); var message = string.Format( "{0} slept for {1} seconds in session {2}.", name, sleepSeconds, OperationContext.Current.SessionId); Console.WriteLine(message); return message; } }
The client may very well be beautifully structured and must:
Briefly describe your choice of optimizations.
To make the task easier to solve, I have already solved it for you… just not optimally. Download my implementation.
Send your solution to anders at lybecker.com before midnight; the winner will be announced tomorrow and will become the happy owner of a remote-controlled helicopter with accessories, ready to fly. A cool office gadget. The helicopter is easy to fly and can take quite a beating – I know from experience.
See the helicopter fly below.
]]>The presentation went very well judging by the number of questions during the almost 2½ hour long presentation and the feedback afterwards. Love it – thanks
The presentation and code samples can be downloaded below:
Please do contact me if you have any further questions – I’d love to help out.
]]>Timeouts are not directly related to the throttling properties, but they affect the way the service (or client) performs under load. Timeout properties can be perceived as an annoyance when sending larger messages or dealing with slow connections or services. The frustration increases as the naming of the properties can be deceiving. Read on… and I’ll explain.
Below are the binding properties that all throw TimeoutExceptions if any of the setting thresholds are exceeded:
Example of configuration file:
<system.serviceModel> <bindings> <netTcpBinding> <binding name="netTcpBindingConfig" openTimeout="00:01:00" closeTimeout="00:01:00" sendTimeout="00:01:00" receiveTimeout="00:10:00"> <reliableSession enabled="true" inactivityTimeout="00:10:00" /> </binding> </netTcpBinding> </bindings> </system.serviceModel>]]>
There are other throttling features in WCF that are designed to protect the service from request flooding.
These WCF throttling features are configured on the binding, service behaviors and endpoint behaviors.
Binding properties:
There are two additional properties on the binding that one might mistakenly think are request throttling properties. These are the MaxBufferPoolSize and MaxBufferSize properties, and they control the WCF memory Buffer Manager.
Note: remember to set the MaxReceivedMessageSize and MaxBufferSize properties to the same value if using TransferMode.Buffered or an ArgumentException will be thrown at runtime with the message “For TransferMode.Buffered, MaxReceivedMessageSize and MaxBufferSize must be the same value.”
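In code that could look like this (a sketch; the values are illustrative):

```csharp
var binding = new NetTcpBinding
{
    TransferMode = TransferMode.Buffered,
    // These two must match in buffered mode,
    // or WCF throws an ArgumentException at runtime.
    MaxReceivedMessageSize = 65536,
    MaxBufferSize = 65536
};
```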
Binding properties for the readerQuotas element – used by XmlReader under the hood:
The DataContractSerializer is used by default to serialize and deserialize messages, as it is much faster than the XmlSerializer, but has fewer features. The DataContractSerializer has a single property that can be configured at the endpoint or service behavior:
Resist the temptation of setting any of these properties to Int32.MaxValue and the like, because determining the correct values is difficult. Throttle the service so some clients get served, instead of risking bogging down the service with request flooding, resulting in no clients being served.
Example of configuration file:
<system.serviceModel> <behaviors> <endpointBehaviors> <behavior name="endpointBehavior"> <dataContractSerializer maxItemsInObjectGraph="65536"/> </behavior> </endpointBehaviors> <serviceBehaviors> <behavior name="serviceBehaviors"> <dataContractSerializer maxItemsInObjectGraph="65536"/> </behavior> </serviceBehaviors> </behaviors> <bindings> <netTcpBinding> <binding name="netTcpBindingConfig" maxReceivedMessageSize="65536" maxConnections="10"> <readerQuotas maxArrayLength="16384" maxBytesPerRead="4096" maxDepth="32" maxStringContentLength="8192" maxNameTableCharCount="16384"/> </binding> </netTcpBinding> </bindings> </system.serviceModel>
Because of the very conservative default settings, many developers have run into what seemed like WCF performance problems but was actually incorrectly configured throttling.
WCF throttling is a service behavior configuration, and the effect of each setting depends on the InstanceContextMode and ConcurrencyMode settings.
These throttling settings can be configured in code via the ServiceThrottlingBehavior in the System.ServiceModel.Description namespace, or through configuration like below:
<system.serviceModel> <serviceBehaviors> <behavior name="throttlingServiceBehavior"> <serviceThrottling maxConcurrentCalls="16" maxConcurrentInstances="160" maxConcurrentSessions="10"/> </behavior> </serviceBehaviors> </system.serviceModel>
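The equivalent code-based configuration could look like this (a sketch; MyService stands in for your service implementation):

```csharp
var host = new ServiceHost(typeof(MyService));

// Mirror the serviceThrottling element from the configuration above.
var throttling = new ServiceThrottlingBehavior
{
    MaxConcurrentCalls = 16,
    MaxConcurrentInstances = 160,
    MaxConcurrentSessions = 10
};
host.Description.Behaviors.Add(throttling);
host.Open();
```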
The default values in .Net 3.0/3.5 are:
The defaults changed in .Net 4.0, as the .Net 3.0/3.5 default values were too conservative given the increase in server resources – especially the number of cores available. The default values for .Net 4.0 are:
The Environment.ProcessorCount property is misleading, as the value is the number of logical processors (Hyper-Threading counts double). My development laptop with four Hyper-Threading cores reports eight.
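A quick check (sketch):

```csharp
// Reports logical processors, so Hyper-Threading counts double:
// a quad-core laptop with Hyper-Threading prints 8.
Console.WriteLine(Environment.ProcessorCount);
```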
]]>I am delighted and lucky to continue working with Raoul and his new company Guide-line.
Congratulations – It’s about time
]]>POET exploits a well-known vulnerability in the way many websites encrypt text stored in ViewState, form authentication tickets, cookies, hidden HTML fields and request parameters.
It is a deficiency in the encryption usage in both Java and the .Net framework, exploiting the fact that encrypted strings are padded in blocks of e.g. 8 or 16 bytes. I will not go into details, as it is explained in detail here.
The exploit works on any block-cipher encryption mechanism, such as AES, DES and Triple DES.
The exploit is quite severe, as it can be used to download the web.config file.
The attack that was shown in the public relies on a feature in ASP.NET that allows files (typically javascript and css) to be downloaded, and which is secured with a key that is sent as part of the request. Unfortunately if you are able to forge a key you can use this feature to download the web.config file of an application (but not files outside of the application). We will obviously release a patch for this… Scott Gu
There are lots of systems affected, such as ASP.Net 1.0-4.0 (WebForms and MVC), SharePoint, Microsoft CRM, JavaServer Faces etc.
HTTPS with SSL/TLS does not protect your site.
Below is a video showing how to use the POET tool with DotNetNuke.
Scott Gu has workaround details until Microsoft releases a patch.
Update September 29th, 2010: A security update is released by Microsoft. More details about the patch on Scott Gu’s blog.
]]>Luckily you can change it, but it isn’t easy to find. Do the following:
It will still connect to all available network connections (wireless and wired), unless they are disabled.
]]>SQL Server FullText is easy to use in applications requiring string searching.
The Danish, Polish and Turkish wordbreaker and stemmer implementations for SQL Server FullText are not developed by Microsoft and therefore not enabled by default. The libraries are however part of the installation process and are therefore present on disk.
To make use of the Danish language capabilities in SQL Server 2008, register the libraries in registry and reload the FullText languages:
Now verify that Danish is enabled with this query: SELECT name FROM sys.fulltext_languages
Note: The DanishFullText.reg assumes that SQL Server is a default instance (not a named instance). If not, modify the file by changing the MSSQL10.MSSQLSERVER to the instance name.
It is the same with Polish and Turkish – they are not registered by default. See more in the MSDN article How to: Load Licensed Third-Party Word Breakers.
List of out of the box SQL Server 2008 FullText supported languages: Arabic, Bengali (India), Brazilian, British English, Bulgarian, Catalan, Chinese (Hong Kong SAR, PRC), Chinese (Macau SAR), Chinese (Singapore), Croatian, Danish, Dutch, English, French, German, Gujarati, Hebrew, Hindi, Icelandic, Indonesian ,Italian, Japanese, Kannada, Korean, Latvian, Lithuanian, Malay – Malaysia, Malayalam, Marathi, Neutral, Norwegian (Bokmål), Polish, Portuguese, Punjabi, Romanian, Russian, Serbian (Cyrillic), Serbian (Latin), Simplified Chinese, Slovak, Slovenian, Spanish, Swedish, Tamil, Telugu, Thai, Traditional Chinese, Turkish, Ukrainian, Urdu, Vietnamese.
]]>I can’t wait to see it in the cinema
PS. I do develop with Java even though I do not blog much about it.
Update: YouTube removed the video due to copyright claims. You can still see it at JavaZone.
]]>Last week I was at Microsoft HQ in Redmond, WA, USA. I was invited by the SQL Azure Development Team to look at some of the new unreleased features and comment on features in their roadmap.
Unfortunately most of the content was confidential, meaning that I was under NDA, so I may not disclose any details. Sorry :-/
During the week with the SQL Azure Development Team I was fortunate to be engaged in detailed technical discussions about some of the upcoming feature releases – mainly the SQL Server features not currently available in SQL Azure. It was interesting and enlightening to discuss their technical challenges and why they have built SQL Azure the way they have.
All in all, my conclusion after this event is that Microsoft takes SQL Azure seriously and it will become a major player in the RDBMS world. It will not just be a SQL Server in the cloud, but a separate product with different market segments and different features. I am looking forward to a bright future with SQL Azure
]]>If you have, then ApiChange is a tool for you. It’s open source, powerful and easy to use
I gave it a spin comparing current trunk version 2.9.2 of Lucene.Net with the latest official release version 2.4.0.
I downloaded ApiChange and ran the following command in a command prompt:
ApiChange.exe -Diff -old C:\Lucene.Net_2_4_0\Lucene.Net.dll -new C:\trunk\Lucene.Net.dll
The output lists all the differences, but here is a summary:
Cool little tool with other features such as:
It’s based on Mono Cecil – a free IL parser – and not reflection, as I initially thought. Go check it out…
]]>With the assistance of Erich Gamma, I have identified four levels of reuse.
Duplicating code or functionality makes it easy to reuse. It’s a real timesaver at first, but keeping all the duplicates up-to-date and maintaining them is a horrifying task. Not to mention the problems when forgetting to update one or more duplicates…
“Copy and paste programming is a pejorative term to describe highly repetitive computer programming code apparently produced by copy and paste operations. It is frequently symptomatic of a lack of programming competence, or an insufficiently expressive development environment, as subroutines or libraries would normally be used instead. In certain contexts it has legitimate value, if used with care.” Wikipedia
Reuse at class level or a set of classes in a software library is common and also fairly easy with object-oriented languages.
“Libraries contain code and data that provide services to independent programs. This allows the sharing and changing of code and data in a modular fashion. Some executables are both standalone programs and libraries, but most libraries are not executables …” Wikipedia
Patterns allow you to reuse design ideas and concepts independent of concrete code.
“In software engineering, a design pattern is a general reusable solution to a commonly occurring problem in software design. A design pattern is not a finished design that can be transformed directly into code. It is a description or template for how to solve a problem that can be used in many different situations. Object-oriented design patterns typically show relationships and interactions between classes or objects, without specifying the final application classes or objects that are involved.” Wikipedia
An object-oriented abstract design to solve a specific problem – often very specialized, like Unit Testing frameworks and Object-Relational Mapping frameworks, but can be large, complex or domain specific.
“A software framework … is an abstraction in which common code providing generic functionality can be selectively overridden or specialized by user code providing specific functionality. Frameworks are a special case of software libraries in that they are reusable abstractions of code wrapped in a well-defined API, yet they contain some key distinguishing features that separate them from normal libraries.” Wikipedia
It’s all about being pragmatic – not all software will reach the fourth level of reuse and be structured as frameworks – frankly, it shouldn’t. That said, copy/paste style development is unquestionably the wrong path.
What level is your company at?
I’m in Prague, Czech Republic for the Apache Lucene EuroCon 2010; I wandered around and saw this drawing on a house wall.
I find it hilarious – especially the natural shadow over the coffins. It’s just by pure coincidence that I was there at the time of day when the doorway cast its shadow over the coffins.