[mod-users] Requesting overview of the software stack
ytaibt at gmail.com
Fri Apr 8 18:08:29 UTC 2016
Thanks! See inline.
On Fri, Apr 8, 2016 at 2:44 AM, Harry van Haaren <harryhaaren at gmail.com> wrote:
> On Fri, Apr 8, 2016 at 5:15 AM, Ytai Ben-Tsvi <ytaibt at gmail.com> wrote:
>> Hey guys,
> I'll answer some of this, just the parts I'm familiar with.
>> Any chance one of you can provide (or refer me to) a high-level
>> description of the entire software stack of the MOD, starting at the OS all
>> the way to the application? This shouldn't be longer than a simple diagram,
>> perhaps with links to where the source code can be found in the MOD github
> OS -> ALSA (Sound drivers) -> JACKd ("sound server") -> MOD host.
Would be nice to have some pointers here to where the code actually lives
as well as basic internal structure of the MOD host application.
> MOD Host -> Web Socket -> HTML/CSS/JS -> Browser
> Perhaps one of the MOD team will jump in here with details on the UI side
> of things - I'm not quite a web-developer ;)
+1 for having feedback from the team, preferably as a short wiki page.
>> And a specific question: are you using vanilla Linux or did you apply any
>> realtime patch to the kernel?
> I think the MOD runs on the sunxi kernel - this is common with ARM
> embedded platforms that are not (yet) upstreamed to the mainline Linux kernel.
>> If the former, how do you guarantee realtime performance?
> We must first distinguish between hard realtime and soft realtime.
> Hard realtime systems (RTOSes, for use in robotics, automotive, etc.) need
> verifiable proof of RT-safety.
> Soft realtime systems (the audio use-case, among others) mean that the
> system should be optimized for RT safety, but it is not guaranteed. No
> "high level" audio software manufacturer guarantees RT safety - it's not
> possible without an RTOS (or Xen, or Xenomai... etc). The point I'm trying
> to make is that no standard Windows, OS X or Linux system is hard-realtime
> safe. Apologies if you knew this already :)
Agreed. But if you leave safety-critical applications aside, there are
still things that can be done with Linux to make it more realtime. The
vanilla kernel is *fair*, that is, it will make sure every thread gets a
fair amount of runtime according to its priority. This is *not* what you
want for realtime applications. There are kernel patches (PREEMPT_RT, also
known as RT-preempt, is the well-known one) that make the kernel itself
preemptible, so that under the realtime scheduling policies there is no
fairness - high-priority tasks always take precedence. This allows you to
build a system where the audio thread is never preempted by anything less
critical, like the UI.
Having the kernel itself support this is of course not sufficient: the
application also needs to avoid non-realtime constructs (e.g. memory
allocation) on the critical path, and should probably lock the memory pages
used for audio processing into RAM to avoid page-fault jitter in memory
access. And of course, some device drivers may be realtime-hostile if they
spend too much time in interrupt routines or critical sections.
>> I get the feeling that the XRUN problems are likely a result of improper
>> task scheduling and will never go away completely unless the entire device
>> is treated as realtime system. The fact that those XRUNs are sporadic makes
>> me suspect that there might be something wrong with the scheduling, if it
>> ever allows anything significant (as opposed to quick interrupts) other
>> than the audio to preempt the audio.
> Yes you're right - scheduling performance, preemption and rtprio are the
> first points to investigate. I know the MOD team are aware of this - and
> have done work in that area already. I'm not sure of the current status of
> RT investigations.
> I've documented (some|all) of my knowledge on the topic of tuning systems
> for RT audio performance here, and will be in Berlin for the miniLAC this
> week, so hopefully can spend some time with the folks at MOD to investigate
Very nice writeup! You've clearly spent good time digging into this and
actually solving some of the critical problems.
Coming from an MCU/RTOS background (avionics, robotics), using complicated
hardware and software architectures in a realtime context always makes me
feel a little uncomfortable. The trial-and-error nature of the process
never really convinces you that you've covered all the corner cases, so you
end up believing that you've reduced the probability of missing deadlines
to something very low, but not quite zero, and you end up deliberately
under-utilizing the processor in order to hide any deviations. So it's hard
to imagine that this setup will give you glitchless operation with the CPU
at 90%.
But OTOH, I definitely see the advantages of a free and mature POSIX system
with a great existing ecosystem for audio-processing applications, so I
think this was overall a good choice.
>> I have some experience in this field and would be happy to look over some
>> code or consult, if you feel that this could help.
> Great - I will keep that in mind, and keep you in the loop :)
> Cheers, -Harry