Discussion:
Mac mini sound woes
Benjamin Herrenschmidt
2005-03-27 23:44:03 UTC
Hi Takashi !

I'm looking into adding proper sound support for the Mac Mini. The
problem is that, from what I've seen (Apple's driver is only partially
open source nowadays, it seems, and the latest Darwin drop is both
incomplete and doesn't build), that beast only has a fixed-function D->A
converter, no HW volume control.

It seems that Apple's driver has an in-kernel framework for doing volume
control, mixing, and other horrors right in the kernel, in temporary
buffers, just before they get DMA'ed (gack!)

I want to avoid something like that. How "friendly" would ALSA be to
drivers that don't have any HW volume control capability? Do typical
userland libraries provide software volume control? Do you suggest I
just don't do any control? Or should I implement a double-buffer scheme
with software gain as well in the kernel driver?
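For concreteness, the "software gain" being discussed is nothing more than a per-sample multiply with clipping. A minimal illustrative sketch (not code from any driver) of what such a scheme would compute on each buffer before it is handed off for DMA:

```python
def apply_gain(samples, gain):
    """Scale signed 16-bit PCM samples by a linear gain factor.

    This is essentially all a software volume control has to do to each
    buffer before it is handed off for DMA: multiply and clip.
    """
    out = []
    for s in samples:
        v = int(s * gain)
        # Clip to the signed 16-bit range instead of wrapping around.
        out.append(max(-32768, min(32767, v)))
    return out

# Half volume attenuates every sample by ~6 dB:
# apply_gain([0, 1000, 32767], 0.5) -> [0, 500, 16383]
```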

Ben.

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to ***@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/



-------------------------------------------------------------------------------
Note: this newsgroup is a unidirectionally gated mailing list.
Reply only by mail to the address given in the Reply-To header.
Questions about the gateway -> ***@inka.de.
-------------------------------------------------------------------------------
Andrea Arcangeli
2005-03-28 01:45:05 UTC
Post by Benjamin Herrenschmidt
suggest I just don't do any control? Or should I implement a double-buffer
scheme with software gain as well in the kernel driver?
I recall having sometimes clicked on volume controls that weren't
hardware-related; I don't pay much attention when stuff works, so perhaps
it was the KDE sound system doing it, or something like that.

I would suggest doing the D->A only, then adding a basic hack to
the G5 too ;), and then going back to the mini to do the gain emulation in
kernel space if somebody complains ;). The software gain emulation
sounds quite orthogonal to the rest, so it can be done later if needed.

Too loud sound is better than no sound.
Benjamin Herrenschmidt
2005-03-28 02:37:27 UTC
Post by Andrea Arcangeli
Post by Benjamin Herrenschmidt
suggest I just don't do any control? Or should I implement a double-buffer
scheme with software gain as well in the kernel driver?
I recall having sometimes clicked on volume controls that weren't
hardware-related; I don't pay much attention when stuff works, so perhaps
it was the KDE sound system doing it, or something like that.
I would suggest doing the D->A only, then adding a basic hack to
the G5 too ;), and then going back to the mini to do the gain emulation in
kernel space if somebody complains ;). The software gain emulation
sounds quite orthogonal to the rest, so it can be done later if needed.
Too loud sound is better than no sound.
Will do, of course. As for the G5, yes, I need to work on that too.

Ben.

Benjamin Herrenschmidt
2005-03-29 03:45:48 UTC
Post by Benjamin Herrenschmidt
It seems that Apple's driver has an in-kernel framework for doing volume
control, mixing, and other horrors right in the kernel, in temporary
buffers, just before they get DMA'ed (gack!)
I want to avoid something like that. How "friendly" would ALSA be to
drivers that don't have any HW volume control capability? Do typical
userland libraries provide software volume control? Do you suggest I
just don't do any control? Or should I implement a double-buffer scheme
with software gain as well in the kernel driver?
alsa-lib handles both mixing (dmix plugin) and volume control (softvol
plugin) in software for codecs like this that don't do it in hardware.
Since Windows does mixing and volume control in the kernel (ugh) it's
increasingly common to find devices that cannot do these. You don't
need to handle it in the driver at all.
Yah, OS X does it in the kernel too lately... at least Apple's drivers
are doing it; it's not a "common" lib. They also split treble/bass that
way when you have an iSub plugged in on USB and use the machine's internal
speakers for treble.
dmix has been around for a while but softvol plugin is very new, you
will need ALSA CVS or the upcoming 1.0.9 release.
Ok.

Ben.

Lee Revell
2005-03-29 03:38:59 UTC
Post by Benjamin Herrenschmidt
It seems that Apple's driver has an in-kernel framework for doing volume
control, mixing, and other horrors right in the kernel, in temporary
buffers, just before they get DMA'ed (gack !)
I want to avoid something like that. How "friendly" would Alsa be to
drivers that don't have any HW volume control capability ? Does typical
userland libraries provide software processing volume control ? Do you
suggest I just don't do any control ? Or should I implement a double
buffer scheme with software gain as well in the kernel driver ?
alsa-lib handles both mixing (dmix plugin) and volume control (softvol
plugin) in software for codecs like this that don't do it in hardware.
Since Windows does mixing and volume control in the kernel (ugh) it's
increasingly common to find devices that cannot do these. You don't
need to handle it in the driver at all.

dmix has been around for a while but softvol plugin is very new, you
will need ALSA CVS or the upcoming 1.0.9 release.
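For readers who want to try it, enabling dmix is only a few lines of ~/.asoundrc. This is a sketch only; the "hw:0,0" device name and the ipc_key value below are placeholders to adapt to the actual card:

```
pcm.!default {
    type plug
    slave.pcm {
        type dmix
        ipc_key 1024
        slave {
            pcm "hw:0,0"
        }
    }
}
```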

Lee

Benjamin Herrenschmidt
2005-03-29 08:30:43 UTC
dmix has been around for a while but softvol plugin is very new, you
will need ALSA CVS or the upcoming 1.0.9 release.
Instead of the lame claims about how ugly it is to do hardware mixing in
Well, we are claiming _and_ obviously proposing a solution ;)
1. Where do you have true "real-time" under Linux? Kernel or user space?
That's bullshit. You don't need "true" real time for the mixing/volume
processing in most cases. I've been doing sound drivers on various
platforms that don't have anything that looks like true realtime either,
and believe me, it works. Besides, if doing it in Linux shows latency
problems, let's just fix them.
2. Where would you put the firmware for a DSP? Far away, or as near to
the hardware as possible?
Yes. This point is moot. The firmware is somewhere in your filesystem
and is obtained with the request_firmware() interface; it has nothing to
do in the kernel. If it's really small, it might be OK to stuff it in
the kernel. But anyway, this point is totally unrelated to the statement
you are replying to.
3. How do you synchronize devices on a non-real-time system?
I'm not sure I understand what you mean here. I suppose it's about
propagation of clock sources, which is traditionally done in the slave
way; that is, the producer (whatever it is: mixer, app, ...) is "sucked"
by the lowest level at a given rate, with the sample count being the
timestamp (variable sample sizes have other, and of course less precise,
means to synchronize).
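The "sample count as timestamp" idea above can be sketched in a couple of lines (a hypothetical helper, not code from any driver): the number of frames the hardware has consumed, divided by the sample rate, is the stream clock that producers slave to.

```python
def stream_time(frames_consumed, rate_hz):
    """Stream timestamp implied by the hardware frame counter.

    The DMA engine consumes frames at a fixed rate, so the frame count
    itself is the clock; everything upstream (mixer, app, ...) is paced
    by how far this counter has advanced, not by a wall-clock timer.
    """
    return frames_consumed / rate_hz

# After 22050 frames at 44100 Hz the stream clock reads 0.5 s.
```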
4. Why the hell do we have whole network protocols inside the kernel?
Couldn't those be perfectly handled in user space? Or maybe there are
good reasons?
Network protocols do very little computation on the data in the packets
(well, except for IPsec, mostly for security reasons), but this is again
totally unrelated. Like comparing apples and pears.
5. Should a driver just basically map the hardware to user space, or
shouldn't it perhaps provide an abstraction from the actual hardware
implementing it?
This is in no way incompatible with having the mixing and volume control
in userspace. It's actually quite a good idea to have a userland library
that isolates you from the low-level "raw" kernel interfaces of the
driver and, in the case of sound, provides you with the means to set up
codec chains, mixing components, etc.
6. Is there really a conceptual difference between DSP+CPU+driver and
just looking at the MMX IP core of the CPU as a DSP?
Again, I don't see how this makes any point in the context of the
discussion above and your heated reply.

Ben.

Marcin Dalecki
2005-03-29 09:32:06 UTC
Post by Benjamin Herrenschmidt
Well, we are claiming _and_ obviously proposing a solution ;)
I beg to differ.
Post by Benjamin Herrenschmidt
1. Where do you have true "real-time" under Linux? Kernel or user space?
That's bullshit.
Wait a moment...
Post by Benjamin Herrenschmidt
you don't need "true" real time for the mixing/volume
processing in most cases.
Yeah! Give me a break: *most cases*. Playing sound and video is
paramount among the cases requiring asserted timing. Isn't that a
property RT is defined by?
Post by Benjamin Herrenschmidt
I've been doing sound drivers on various
platforms that don't have anything that looks like true realtime either,
and believe me, it works. Besides, if doing it in Linux shows latency
problems, let's just fix them.
Perhaps as an exercise you could fix the jerky mouse movements on
Linux too? I would be very glad to see the mouse, which has truly modest
RT requirements, start to behave the way it's supposed to.
And yes, I expect it to still move smoothly when doing "make -j100
world".
Post by Benjamin Herrenschmidt
2. Where would you put the firmware for a DSP? Far away, or as near to
the hardware as possible?
Yes. This point is moot. The firmware is somewhere in your filesystem
and is obtained with the request_firmware() interface; it has nothing to
do in the kernel. If it's really small, it might be OK to stuff it in
the kernel. But anyway, this point is totally unrelated to the statement
you are replying to.
No. You didn't get it. I'm taking the view that mixing sound is simply
a task you would typically love to have DSP firmware do.
However, providing a DSP for sound processing at 44 kHz on the same
PCB as a 1 GHz CPU is a ridiculous waste of resources, so most hardware
vendors out there decided to use the main CPU instead. The "firmware"
is thus simply running on the main CPU now. Now where should it go? I'm
convinced that it's better to put it near the hardware in the whole
stack. You think it's best to put it far away and to invent artificial
synchronization problems between the different applications putting data
down to the same hardware device.
Post by Benjamin Herrenschmidt
3. How do you synchronize devices on a non-real-time system?
I'm not sure I understand what you mean here. I suppose it's about
propagation of clock sources, which is traditionally done in the slave
way; that is, the producer (whatever it is: mixer, app, ...) is "sucked"
by the lowest level at a given rate, with the sample count being the
timestamp (variable sample sizes have other, and of course less precise,
means to synchronize).
No, I'm simply taking the view that most of the time it's not only a
single application that will feed the sound output. And quite frequently
you have to synchronize even with video output.
Post by Benjamin Herrenschmidt
4. Why the hell do we have whole network protocols inside the kernel?
Couldn't those be perfectly handled in user space? Or maybe there are
good reasons?
Network protocols do very little computation on the data in the packets
(well, except for IPsec, mostly for security reasons), but this is again
totally unrelated. Like comparing apples and pears.
No, it's not that far away. The same constraints that led most people
to move TCP into the kernel basically apply to sound output.
It's just a data stream these days, after all.
Post by Benjamin Herrenschmidt
5. Should a driver just basically map the hardware to the user space or
shouldn't
it perhaps provide abstraction from the actual hardware implementing it?
This is in no way incompatible with having the mixing and volume control
in userspace. It's actually quite a good idea to have a userland library
that isolates you from the low level "raw" kernel intefaces of the
driver, and in the case of sound, provides you with the means to setup
codec chains, mixing components, etc...
It is not. At least every other OS out there with significant care for
sound came to a different conclusion.

Takashi Iwai
2005-03-29 10:30:41 UTC
At Tue, 29 Mar 2005 11:22:07 +0200,
Post by Marcin Dalecki
Post by Benjamin Herrenschmidt
Well, we are claiming _and_ obviously proposing a solution ;)
I beg to differ.
Post by Benjamin Herrenschmidt
1. Where do you have true "real-time" under Linux? Kernel or user space?
That's bullshit.
Wait a moment...
Post by Benjamin Herrenschmidt
you don't need "true" real time for the mixing/volume
processing in most cases.
Yeah! Give me a break: *most cases*. Playing sound and video is
paramount among the cases requiring asserted timing. Isn't that a
property RT is defined by?
No, you still don't need a "true" real-time OS.
(More exactly, do we have a "true" RT OS? :)
Post by Marcin Dalecki
Post by Benjamin Herrenschmidt
I've been doing sound drivers on various
platforms that don't have anything that looks like true realtime either,
and believe me, it works. Besides, if doing it in Linux shows latency
problems, let's just fix them.
Perhaps as an exercise you could fix the jerky mouse movements on
Linux too? I would be very glad to see the mouse, which has truly modest
RT requirements, start to behave the way it's supposed to.
And yes, I expect it to still move smoothly when doing "make -j100
world".
On the contrary, doing the soft-mixing/volume in the kernel is the source
of latency when scheduling isn't done properly without preemption.
Post by Marcin Dalecki
Post by Benjamin Herrenschmidt
2. Where would you put the firmware for a DSP? Far away, or as near to
the hardware as possible?
Yes. This point is moot. The firmware is somewhere in your filesystem
and is obtained with the request_firmware() interface; it has nothing to
do in the kernel. If it's really small, it might be OK to stuff it in
the kernel. But anyway, this point is totally unrelated to the statement
you are replying to.
No. You didn't get it. I'm taking the view that mixing sound is simply
a task you would typically love to have DSP firmware do.
However, providing a DSP for sound processing at 44 kHz on the same
PCB as a 1 GHz CPU is a ridiculous waste of resources, so most hardware
vendors out there decided to use the main CPU instead. The "firmware"
is thus simply running on the main CPU now. Now where should it go? I'm
convinced that it's better to put it near the hardware in the whole stack.
I don't understand this logic...
Post by Marcin Dalecki
You think it's best to put it far away and to invent artificial
synchronization problems between the different applications putting data
down to the same hardware device.
Post by Benjamin Herrenschmidt
3. How do you synchronize devices on a non-real-time system?
I'm not sure I understand what you mean here. I suppose it's about
propagation of clock sources, which is traditionally done in the slave
way; that is, the producer (whatever it is: mixer, app, ...) is "sucked"
by the lowest level at a given rate, with the sample count being the
timestamp (variable sample sizes have other, and of course less precise,
means to synchronize).
No, I'm simply taking the view that most of the time it's not only a
single application that will feed the sound output. And quite frequently
you have to synchronize even with video output.
Hmm, how is this related to the topic of whether a job is done in user
or kernel space...?
Post by Marcin Dalecki
Post by Benjamin Herrenschmidt
4. Why the hell do we have whole network protocols inside the kernel?
Couldn't those be perfectly handled in user space? Or maybe there are
good reasons?
Network protocols do very little computation on the data in the packets
(well, except for IPsec, mostly for security reasons), but this is again
totally unrelated. Like comparing apples and pears.
No, it's not that far away. The same constraints that led most people
to move TCP into the kernel basically apply to sound output.
It's just a data stream these days, after all.
It depends on the efficiency, too. And if you think of efficiency,
user space has a big gain in that it can use SIMD operations.
Post by Marcin Dalecki
Post by Benjamin Herrenschmidt
5. Should a driver just basically map the hardware to user space, or
shouldn't it perhaps provide an abstraction from the actual hardware
implementing it?
This is in no way incompatible with having the mixing and volume control
in userspace. It's actually quite a good idea to have a userland library
that isolates you from the low-level "raw" kernel interfaces of the
driver and, in the case of sound, provides you with the means to set up
codec chains, mixing components, etc.
It is not. At least every other OS out there with significant care for
sound came to a different conclusion.
ALSA provides the "driver" feature in user space because it's more
flexible, more efficient and safer than doing it in the kernel. It's
transparent from the app's perspective. It really doesn't matter whether
it's in kernel or user space.

I think your misunderstanding is that you believe user space can't do
RT. That's wrong. See JACK (jackit.sf.net), for example.

Takashi
Marcin Dalecki
2005-03-29 08:11:12 UTC
Post by Benjamin Herrenschmidt
It seems that Apple's driver has an in-kernel framework for doing volume
control, mixing, and other horrors right in the kernel, in temporary
buffers, just before they get DMA'ed (gack !)
I want to avoid something like that. How "friendly" would Alsa be to
drivers that don't have any HW volume control capability ? Does typical
userland libraries provide software processing volume control ? Do you
suggest I just don't do any control ? Or should I implement a double
buffer scheme with software gain as well in the kernel driver ?
alsa-lib handles both mixing (dmix plugin) and volume control (softvol
plugin) in software for codecs like this that don't do it in hardware.
Since Windows does mixing and volume control in the kernel (ugh) it's
increasingly common to find devices that cannot do these. You don't
need to handle it in the driver at all.
dmix has been around for a while but softvol plugin is very new, you
will need ALSA CVS or the upcoming 1.0.9 release.
Instead of the lame claims about how ugly it is to do hardware mixing in
kernel space, the ALSA fans should ask themselves the following questions:

1. Where do you have true "real-time" under Linux? Kernel or user space?
2. Where would you put the firmware for a DSP? Far away, or as near to
the hardware as possible?
3. How do you synchronize devices on a non-real-time system?
4. Why the hell do we have whole network protocols inside the kernel?
Couldn't those be perfectly handled in user space? Or maybe there are
good reasons?
5. Should a driver just basically map the hardware to user space, or
shouldn't it perhaps provide an abstraction from the actual hardware
implementing it?
6. Is there really a conceptual difference between DSP+CPU+driver and
just looking at the MMX IP core of the CPU as a DSP?

Takashi Iwai
2005-03-29 10:05:30 UTC
At Mon, 28 Mar 2005 22:36:09 -0500,
Post by Benjamin Herrenschmidt
It seems that Apple's driver has an in-kernel framework for doing volume
control, mixing, and other horrors right in the kernel, in temporary
buffers, just before they get DMA'ed (gack !)
I want to avoid something like that. How "friendly" would Alsa be to
drivers that don't have any HW volume control capability ? Does typical
userland libraries provide software processing volume control ? Do you
suggest I just don't do any control ? Or should I implement a double
buffer scheme with software gain as well in the kernel driver ?
alsa-lib handles both mixing (dmix plugin) and volume control (softvol
plugin) in software for codecs like this that don't do it in hardware.
Since Windows does mixing and volume control in the kernel (ugh) it's
increasingly common to find devices that cannot do these. You don't
need to handle it in the driver at all.
Yes.
dmix has been around for a while but softvol plugin is very new, you
will need ALSA CVS or the upcoming 1.0.9 release.
dmix currently doesn't work well on PPC, but I'll fix it soon.
Once it's confirmed to work, we can set the dmix/softvol plugins as the
default in the snd-powermac driver configuration. Hopefully this will be
finished before 1.0.9 final.

Takashi
Benjamin Herrenschmidt
2005-03-29 11:10:10 UTC
Post by Takashi Iwai
Yes.
dmix has been around for a while but softvol plugin is very new, you
will need ALSA CVS or the upcoming 1.0.9 release.
dmix currently doesn't work well on PPC, but I'll fix it soon.
Once it's confirmed to work, we can set the dmix/softvol plugins as the
default in the snd-powermac driver configuration. Hopefully this will be
finished before 1.0.9 final.
Can the driver advertise in some way what it can do? Depending on the
machine we are running on, it will or will not be able to do HW volume
control... You probably don't want to use softvol in the former case...

dmix by default would be nice though :)

Ben.

Takashi Iwai
2005-03-29 12:20:16 UTC
At Tue, 29 Mar 2005 21:04:50 +1000,
Post by Benjamin Herrenschmidt
Post by Takashi Iwai
Yes.
dmix has been around for a while but softvol plugin is very new, you
will need ALSA CVS or the upcoming 1.0.9 release.
dmix currently doesn't work well on PPC, but I'll fix it soon.
Once it's confirmed to work, we can set the dmix/softvol plugins as the
default in the snd-powermac driver configuration. Hopefully this will be
finished before 1.0.9 final.
Can the driver advertise in some way what it can do? Depending on the
machine we are running on, it will or will not be able to do HW volume
control... You probably don't want to use softvol in the former case...
Add the following to ~/.asoundrc (or /etc/asound.conf for system-wide
configuration):

pcm.softvol {
    type softvol
    slave.pcm {
        type hw
        card 0
        device 0
    }
    control {
        name "PCM Playback Volume"
        card 0
    }
}

Then you can use the PCM "softvol", e.g.

% aplay -Dsoftvol foo.wav

This will create a "PCM" volume control if one doesn't exist, and do
volume attenuation in software. If the control already exists (in the
driver), the software volume is skipped automatically.
The newly created volume can be saved/restored via alsactl.

In addition, you can override the ALSA default PCM by defining it in
~/.asoundrc like:

pcm.!default "softvol"
Post by Benjamin Herrenschmidt
dmix by default would be nice though :)
Yeah, in a future version they will be set as the default, i.e. without
an extra definition in ~/.asoundrc.

Takashi
Lee Revell
2005-03-29 19:07:47 UTC
Post by Benjamin Herrenschmidt
Can the driver advertise in some way what it can do? Depending on the
machine we are running on, it will or will not be able to do HW volume
control... You probably don't want to use softvol in the former case...
dmix by default would be nice though :)
No, there's still no way to ask the driver whether hardware mixing is
supported. It's come up on alsa-devel before. Patches are welcome.

dmix by default would not be nice, as users who have sound cards that
can do hardware mixing would be annoyed. However, in the upcoming 1.0.9
release softvol will be used by default for all the mobo chipsets.

Lee

Takashi Iwai
2005-03-29 19:37:37 UTC
At Tue, 29 Mar 2005 14:05:08 -0500,
Post by Lee Revell
Post by Benjamin Herrenschmidt
Can the driver advertise in some way what it can do? Depending on the
machine we are running on, it will or will not be able to do HW volume
control... You probably don't want to use softvol in the former case...
dmix by default would be nice though :)
No, there's still no way to ask the driver whether hardware mixing is
supported. It's come up on alsa-devel before. Patches are welcome.
Well, I don't remember the discussion thread on alsa-devel about this,
but it's a good idea for alsa-lib to check the capability of hw mixing
and apply dmix only if necessary. (In the case of softvol, it can
check the existence of the hw control by itself, though.)

Currently, dmix is enabled on a per-driver-type basis. That is, dmix is
set as the default in each driver's configuration that is known to have
no hw mixing functionality.
Post by Lee Revell
dmix by default would not be nice, as users who have sound cards that
can do hardware mixing would be annoyed. However, in the upcoming 1.0.9
release softvol will be used by default for all the mobo chipsets.
In 1.0.9, dmix will be the default, too, for most of the mobo drivers.

Takashi