On octidi 28 Messidor, year CCXXIII, Martin G. McCormick wrote:
> What replaces the standard sound device?
In short: ALSA.

In long: the kernel devices for ALSA are present in /dev/snd/, but
applications are not supposed to access them directly: they are supposed
to rely solely on the API exposed by the ALSA library, libasound. The
library uses those devices to offer features that we would not want
implemented in kernel space, including virtual sound devices that do not
map to any kernel device.

The most useful features of libasound are the plug and dmix plugins:
plug converts any format provided by the application into a format
supported by the underlying device; dmix allows several applications to
share the same hardware device without a sound server; plus, of course,
user configuration.

If you are using Linux, then unless you are using the third-party (IIRC
partially proprietary) OSS drivers (and you would know it), you are
using ALSA, and /dev/dsp is almost completely useless. Other Libre
operating systems still use the OSS API.

> I have written some experimental programs that play and record sound
> using /dev/dsp and they work. Obviously, there is a lot of bad design
> in the world that works and I hear the discussion that says that
> /dev/dsp is out-dated so what is considered the best way to send
> and receive audio in a program?

As I said: libasound. Or, of course, any higher-level library that wraps
libasound and others in a common API.

> If there is no /dev/dsp, programs such as mplayer and
> the experimental applications I have written will be dead in the
> water which is why I am asking these questions now.

MPlayer, like all reasonably large current Libre projects, has support
for libasound along with other sound APIs (OSS, of course, SDL, Jack,
Sun, win32, etc.). Some programs do not. These are mostly crappy old
proprietary software, like Skype, but as you explain, personal programs
without enough manpower are in that situation too.

For them, various emulation layers exist. The oldest one lives in the
kernel: it exposes /dev/dsp devices backed by the ALSA drivers.
Using it loses most of the benefits of ALSA: no user configuration, no
plugins.

The second one is aoss (package alsa-oss): it uses shared-library magic
to intercept accesses to /dev/dsp and emulate them with libasound. It
therefore has all the features of ALSA, but a few drawbacks, most
notably not working with statically-linked binaries. There may also be
another similar solution using ptrace().

> The experimental applications I have written are sound-activated
> recording and sound delays. In gcc, ioctl is what one uses to
> open /dev/dsp with specific characteristics such as sample rate
> and size. I hope what replaces it isn't too indirected. There
> are times when it is nice to explain what is happening in one
> breath. I know that isn't always possible but it is nice when
> you can.

The libasound API is more verbose than OSS, but not much more complex.
You can have a look at the ALSA device implementation in FFmpeg
(libavdevice) for a reasonably simple and self-contained example.

Regards,

-- 
  Nicolas George

Archive: https://lists.debian.org/20150716124325.ga2416...@phare.normalesup.org