As great as .NET Core is for writing cross-platform software, it lacks some basic capabilities, particularly functionality that works radically differently under the hood on each platform and wasn’t a priority for the authors of .NET Core. One of these missing capabilities is the ability to play sound.
With its predecessor, the .NET Framework, you could play sound easily by using classes like SoundPlayer from the framework’s own standard class library. Likewise, NuGet packages such as NAudio added many extra audio-processing capabilities.
Unfortunately, NAudio is not fully compatible with .NET Core yet. Although there is a pre-release version that targets .NET Standard and is therefore usable from .NET Core, the bulk of its functionality is still only available on Windows.
There is, of course, a reliable way to play sound on .NET Core on any platform, but it requires quite a few additional dependencies: you would have to bring in ASP.NET Core components and Node.js and use NodeServices to get that solution to work.
Therefore, if you want to play audio on .NET Core in the most efficient way without pulling in too many additional components, you can write your own library to do so. Fortunately, the process is not prohibitively difficult.
In this post, we will focus on how to set up the base library structure and how to play audio when running on Windows. In the following posts, we will introduce Linux and macOS compatibility as well.
Setting up the project structure
We will want our code to be reusable in other .NET Core programs. Therefore, the project type that we are building is a .NET Standard class library.

Projects that target .NET Standard can only be compiled into class libraries, not into executables. Therefore, if you are building an .EXE file, it has to target either .NET Framework or .NET Core. Class libraries, however, can target .NET Standard, which allows them to be compiled into binaries compatible with either of the frameworks.
Once the project is created, we will need to organize it into an optimal structure for a library with audio playback capabilities. The first thing we will do is create an interface with common audio player methods. As each operating system has a different way of playing audio, there will be one implementation of this interface per OS. Each of those classes will be internal, as the code that will use the library shouldn’t be concerned with which implementation to use. This decision will be handled by the library itself.
The structure of our interface is as follows:
using System.Threading.Tasks;

namespace NetCoreAudio.Interfaces
{
    public interface IPlayer
    {
        Task Play(string fileName);
        Task Pause();
        Task Resume();
        Task Stop();
    }
}
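To show how that selection might work, here is a minimal sketch of a public facade class that picks the right internal implementation at runtime. The Player and WindowsPlayer names are my own illustration (WindowsPlayer being the Windows class we build below), not necessarily the exact types used in the NetCoreAudio repository:

using System;
using System.Runtime.InteropServices;
using System.Threading.Tasks;
using NetCoreAudio.Interfaces;

namespace NetCoreAudio
{
    // Public entry point; the internal, OS-specific players stay hidden from the caller.
    public class Player : IPlayer
    {
        private readonly IPlayer _internalPlayer;

        public Player()
        {
            if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
                _internalPlayer = new WindowsPlayer();
            else
                // Linux and macOS implementations are the subject of the follow-up posts.
                throw new PlatformNotSupportedException("Only Windows playback is covered in this part.");
        }

        public Task Play(string fileName) => _internalPlayer.Play(fileName);
        public Task Pause() => _internalPlayer.Pause();
        public Task Resume() => _internalPlayer.Resume();
        public Task Stop() => _internalPlayer.Stop();
    }
}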
Now, we can add a Windows-specific implementation of the interface.
Sending commands to MCI on Windows
To play audio on Windows, we will need to use winmm.dll, a native library that ships with the OS and forms the core part of the Windows Multimedia API. The only part of this DLL that we are interested in is the function that sends audio-specific commands to the API, and in order to use it, we need to link to the DLL from our code.
This is achieved with the DllImport attribute, which allows us to declare a function in our managed C# code that maps directly to a method within the native DLL we are linking to.
The function we are interested in is mciSendString. In order to use it, we will need to have the following declaration in the class that implements the IPlayer interface mentioned previously:
[DllImport("winmm.dll")]
private static extern long mciSendString(string command, StringBuilder stringReturn, int returnLength, IntPtr hwndCallback);
The function has several parameters that allow us to finely control the execution of commands and receive callbacks. However, for demonstration purposes, all commands that we send to this function will be issued in a simple “fire and forget” manner. Therefore, except for the command text itself, we will set all other parameters to nulls, zeroes and null pointers. If you need something more sophisticated, the detailed documentation is available here.
The function that will receive Windows Multimedia API commands and execute them looks as follows:
private void ExecuteMciCommand(string commandString)
{
    var result = mciSendString(commandString, null, 0, IntPtr.Zero);
    if (result != 0)
    {
        throw new Exception($"Error executing MCI command. Error code: {result}");
    }
}
The mciSendString function call inside it returns 0 if the command executed successfully; any other value is an error code, which is why we throw an exception.
Now, on to the commands that need to be sent to the function. The commands defined below are just standard C# strings. Each of them is case-insensitive; however, I have used some capital letters for readability.
First, let’s examine a potential implementation of the Play() method of the IPlayer interface. In our example, we will call the ExecuteMciCommand() function defined earlier three times.
We will be using the same audio device for all of our operations; therefore, before we start the playback, we will need to close any other media being played by the same device. This can be achieved by sending the following command to the ExecuteMciCommand() function:
Close All
After this, we will need to open a media device and give it a reference to the file that we want to play. It will also be given an alias that can be used in the commands executed afterwards. “myDevice” can be any alias of your choice. The “fileName” parameter is either an absolute path to the audio file in Windows format (e.g. C:\Temp\audio.mp3) or a path relative to the folder where the application resides.
Open {fileName} Type MPEGVideo Alias myDevice
We assume that an interpolated string is used to insert the fileName parameter into the command, hence the curly brackets.
Next, we will actually play the file, which is done as follows:
Play myDevice
The pause and resume commands consist of the corresponding verb followed by the device alias, so they will be “Pause myDevice” and “Resume myDevice” respectively. For stopping the playback, either the “Close” or the “Stop” command can be used, followed by the device alias.
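Putting the commands above together, a minimal Windows implementation of the IPlayer interface could look something like the sketch below. The class name WindowsPlayer and the fixed “myDevice” alias are illustrative choices rather than the exact code from the NetCoreAudio repository:

using System;
using System.Runtime.InteropServices;
using System.Text;
using System.Threading.Tasks;
using NetCoreAudio.Interfaces;

namespace NetCoreAudio
{
    internal class WindowsPlayer : IPlayer
    {
        [DllImport("winmm.dll")]
        private static extern long mciSendString(string command, StringBuilder stringReturn, int returnLength, IntPtr hwndCallback);

        public Task Play(string fileName)
        {
            // Free the device from any previous media, open the file under an alias and play it.
            ExecuteMciCommand("Close All");
            ExecuteMciCommand($"Open {fileName} Type MPEGVideo Alias myDevice");
            ExecuteMciCommand("Play myDevice");
            return Task.CompletedTask;
        }

        public Task Pause()
        {
            ExecuteMciCommand("Pause myDevice");
            return Task.CompletedTask;
        }

        public Task Resume()
        {
            ExecuteMciCommand("Resume myDevice");
            return Task.CompletedTask;
        }

        public Task Stop()
        {
            // Either "Stop" or "Close" halts the playback; closing also releases the device.
            ExecuteMciCommand("Stop myDevice");
            ExecuteMciCommand("Close myDevice");
            return Task.CompletedTask;
        }

        private void ExecuteMciCommand(string commandString)
        {
            var result = mciSendString(commandString, null, 0, IntPtr.Zero);
            if (result != 0)
            {
                throw new Exception($"Error executing MCI command. Error code: {result}");
            }
        }
    }
}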
This documentation explains how device aliases work, and this reference page contains the full list of the commands that can be used.
The good news is that, with the playback type set to MPEGVideo, the commands sent to the mciSendString function can play MP3 and WAV files, as well as any other format that can be used by MPEG videos.
Something to look out for
While testing the library, I discovered that the “Open” MCI command doesn’t work very well in some scenarios; inside a multi-threaded application is one of them. In those cases, it returns error code 266, which is normally returned when a suitable codec could not be found on the system. When this happens, the audio device is not opened; therefore, any attempt to execute any other command against the device alias will fail.
However, there is a workaround. It is possible to run most of the MCI commands directly against a file name instead of an alias. So, if the “Open” command is returning 266 and you are pulling your hair out trying to find out why, you can skip that command and use the following one to start the playback:
Play {fileName}
One thing to note, though: the “Close” command doesn’t work with a file name. So, if there is no device alias available, you can execute the following command to stop the playback:
Stop {fileName}
And follow it up with this command:
Close All
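If you want the library to handle this automatically, one possible (purely illustrative) approach is to fall back to file-name commands when “Open” fails, remembering which mode was used so that Stop() behaves accordingly. The _fileName and _usingAlias fields below are additions of my own and would slot into the WindowsPlayer sketch shown earlier, replacing its Play and Stop methods:

private string _fileName;
private bool _usingAlias;

public Task Play(string fileName)
{
    _fileName = fileName;
    ExecuteMciCommand("Close All");
    try
    {
        // Normal path: open the device under an alias and play it.
        ExecuteMciCommand($"Open {fileName} Type MPEGVideo Alias myDevice");
        ExecuteMciCommand("Play myDevice");
        _usingAlias = true;
    }
    catch (Exception)
    {
        // "Open" failed (for example with error code 266); address the file directly instead.
        ExecuteMciCommand($"Play {fileName}");
        _usingAlias = false;
    }
    return Task.CompletedTask;
}

public Task Stop()
{
    if (_usingAlias)
    {
        ExecuteMciCommand("Stop myDevice");
        ExecuteMciCommand("Close myDevice");
    }
    else
    {
        // "Close" does not accept a file name, so stop by file name and close everything.
        ExecuteMciCommand($"Stop {_fileName}");
        ExecuteMciCommand("Close All");
    }
    return Task.CompletedTask;
}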
Wrapping up
A code repository that uses the principles shown in this short tutorial, NetCoreAudio, is available here. It also comes with a console app that can be used to see the class library in action.
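For illustration, consuming the library from a console app could be as simple as the snippet below (assuming the Player facade sketched earlier and an audio file placed next to the executable):

using System;
using System.Threading.Tasks;
using NetCoreAudio;

class Program
{
    static async Task Main()
    {
        // "sample.mp3" is just a placeholder file name for this example.
        var player = new Player();
        await player.Play("sample.mp3");

        Console.WriteLine("Playing... press any key to stop.");
        Console.ReadKey();

        await player.Stop();
    }
}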
This is it for part one. In the next article, I will demonstrate the Linux implementation of the IPlayer interface.