How do you play sound in .NET Core apps? Is there a version of the NAudio NuGet package for .NET Core, or its equivalent? Sadly, playing sound is nowhere near as straightforward on .NET Core as it is on .NET Framework, and there isn't a simple NuGet-based solution either. However, there is a way.
.NET Core has certainly come a long way since Visual Studio 2017 was first released. It is now at the stage where the framework itself and the technologies that support it are mature enough to be used in production. However, although .NET Core can be deployed on any of the most widely used operating systems and any CPU architecture that supports them, the framework is still pretty bare-bones compared to its predecessor, .NET Framework.
Many things that .NET Framework can do are very Windows-specific with no platform-independent equivalent, so .NET Core does not natively support them. One such capability is playing sound from code.
With .NET Framework, you have native classes that support it, such as SoundPlayer from the System.Media namespace, as well as third-party NuGet packages, such as NAudio. Neither is available in .NET Core and, if you browse the NuGet repository for sound libraries compatible with .NET Core, you'll soon realize that there is nothing that will let you play sound from code in a straightforward manner.
At best, you can download a library that acts as a wrapper around some other assembly that needs to be compiled specifically for a particular operating system or CPU architecture. As well as potentially not being available for a particular type of machine, most such libraries have a strictly enforced paid-for license.
Despite all this, there is a reliable and simple way of playing sound in .NET Core without having to splash out on an expensive software license. But first, let's see why NAudio, one of the most popular NuGet packages for audio processing, isn't up to the task.
Why can’t NAudio be used with .NET Core
Mark Heath, the author of NAudio, said the following on his blog:
Part of the reason for this is simply that NAudio is very Windows-centric. A large part of the codebase consists of P/Invoke or COM interop wrappers around the various Windows audio APIs. So even if a .NET Standard build were to be created, much of the functionality would fail to work if you tried to use it in a .NET Core app running on Linux.
Despite this, Mark has made some progress: he goes on to say that he was able to change the library to get it working on .NET Core, albeit with limited functionality and only when running on Windows. The changes have been packaged into version 1.9.0-preview1 on NuGet. However, making NAudio work on .NET Core in a cross-platform fashion is still a long way off, due to differences in audio driver architecture across operating systems.
So, how can .NET Core play sound then? The answer is that it can't. However, Node.js can! The good news is that with the NodeServices library, officially provided for ASP.NET Core by Microsoft, you can launch Node.js code from your .NET Core code, and in a fully interactive manner.
There are plenty of ways to play sound in Node.js
Compared to .NET Core, Node.js has been around for a while and is one of the most mature platform-independent technologies. NPM, the package repository used by Node.js projects, is officially the biggest package repository in the world, far larger than equivalents such as NuGet. Because of this, there are countless libraries on NPM specifically dedicated to playing audio.
One of the simplest NPM libraries for playing sound is the one that is, unsurprisingly, called play-sound. The way it works is very simple: the code doesn't care what system it runs on, because the code itself doesn't play anything. Instead, it delegates to one of a list of known audio players, which it will find either installed in the folder the code is running from or mapped to the command-line environment (e.g. through the Path environment variable).
Most of the players the library knows about are Linux-specific. Some, such as mpg123 and mplayer, are available for both x86/x64 and ARM CPU architectures. Others, such as OMXPlayer, are CPU-specific. The list also includes players available exclusively for Windows or Mac. All of these audio players are free and open source, so this particular library will enable you to play audio in any environment that can run a .NET Core app.
This is how you call Node.js code from NodeServices
Using NodeServices in an ASP.NET Core app is easy. This article already covers the fundamentals. Basically, you use the standard ASP.NET Core request pipeline via the Startup class, where you call the AddNodeServices() extension method on the IServiceCollection object passed into the ConfigureServices() method.
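A minimal Startup class along those lines might look like this (a sketch, assuming the Microsoft.AspNetCore.NodeServices package is referenced):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Registers INodeServices with the IoC container.
        services.AddNodeServices();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Pipeline configuration goes here as usual.
    }
}
```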
To actually use the registered NodeServices, you need to resolve the object from the ASP.NET Core Inversion of Control container and call the InvokeAsync() method on it. However, not just any Node.js code can be invoked from NodeServices.
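A call site might then look like this. The module name and parameter are illustrative; the sketch assumes a playSound.js module next to the app and that nodeServices holds a resolved INodeServices instance:

```csharp
// InvokeAsync<T> runs the default export of ./playSound.js and returns
// whatever value the module passes to its callback.
var message = await nodeServices.InvokeAsync<string>("./playSound", "alert.mp3");
Console.WriteLine(message);
```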
The article provides an example of a compatible application. In a nutshell, the Node.js module must export a function that takes any parameters supplied with the InvokeAsync() call. However, it has one additional parameter at the beginning of the function signature: a reference to a callback function. The result is returned to InvokeAsync() when the callback function is called. The example provided by the article, which is probably suitable for most people, returns a single value that is passed as a parameter into the callback function when it's called.
However, this is not the callback's only parameter. There is also an additional parameter at the beginning which, in the example, was set to null. This first parameter represents an error, and it matters: leave it out and your calling .NET code will throw an exception.
The only example the article provides of resolving NodeServices from the Inversion of Control container does so inside an MVC controller. But what if you don't have any controllers? For example, what if you run a simple .NET Core console app that happens to use some ASP.NET Core dependencies just to enable this functionality? This is a valid scenario. After all, unlike ASP.NET on .NET Framework, which was strictly for web applications, an ASP.NET Core app is nothing but thin web-hosting functionality running on top of a bare-bones console app.
The answer is simple: you resolve the services the same way you resolve any other services. One option is to do it in the Configure() method of the Startup class, using the application's service provider.
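A sketch of that approach, assuming the service was registered with AddNodeServices() in ConfigureServices():

```csharp
public void Configure(IApplicationBuilder app)
{
    // GetService<T> is an extension method from the
    // Microsoft.Extensions.DependencyInjection namespace.
    var nodeServices = app.ApplicationServices.GetService<INodeServices>();
}
```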
Alternatively, you can build your own service provider within the ConfigureServices() method and resolve the service from it:
var serviceProvider = services.BuildServiceProvider();
var nodeServices = serviceProvider.GetService<INodeServices>();
Finally, you can resolve your services in the Program class of your ASP.NET Core app. By default, the Main() method builds the web host by calling BuildWebHost() and calls Run() on it straight away. However, you can assign the value returned from BuildWebHost() to a variable, resolve any services you require from it, and only then call Run(). The IWebHost object exposes a Services property, a plain IServiceProvider, whose GetService() method works slightly differently from how it was described previously: you pass it the type of the service you want. Assuming the variable holding the built IWebHost is called webHost, you resolve the service from webHost.Services.
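The resulting Main() method might be sketched like this (assuming the default BuildWebHost() helper generated by the project template):

```csharp
public static void Main(string[] args)
{
    var webHost = BuildWebHost(args);

    // webHost.Services is a plain IServiceProvider, so the non-generic
    // GetService() overload takes the service type as an argument.
    var nodeServices = webHost.Services.GetService(typeof(INodeServices)) as INodeServices;

    // ... use nodeServices here ...

    webHost.Run();
}
```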
To see how these concepts fit together, you can download a sample console app project from here.