how to implement auto-correction of sample rate in flow graph?

Artem Pisarenko
I have an external device streaming data at a fixed sample rate, and a GNU Radio source block for this device which feeds a flow graph containing a hardware sink operating at the same sample rate (e.g. an audio sink). My source device and that sink device (e.g. the PC's sound card) aren't synchronized, i.e. they are clocked from different sources. So I need to do some kind of auto-resampling to eliminate buffer over/under-runs. (Please correct me if I'm using the wrong terminology. I'm a novice and just trying to describe what I'm sure is a common DSP task.)
The first trouble is that I don't know how to make my block adapt to the sample rate prevailing in the graph. The GNU Radio scheduler asks my block to produce such large chunks of samples that my block's internal buffer quickly underruns (even 500 ms of pre-buffering is not enough!). Actually there is no failure; the consumed data is just parked somewhere between my block and the sink (and everything works well when source and sink are synchronous, I checked). But(!) my block loses the pre-buffered level whose deviations I expected to use as the reference for corrections: the hungry scheduler eats all the data in the buffer. Moreover, its extra attempts cause extra CPU usage, which is the second trouble. I experimented with inserting thread sleeps in the work() function; that solves the issue but breaks the data flow (the audio device reports underruns, more or less frequently depending on the sleep interval).
Please advise on a solution. I would appreciate even just being pointed in the right direction.

Re: how to implement auto-correction of sample rate in flow graph?

Artem Pisarenko
Oops, it seems I misunderstood the scheduler's behavior. Since each block runs in its own thread, do I just need to block in the work() function and adjust the maximum number of output items in my source block? But there is still a problem with input data jitter. How long is the work() function allowed to block?
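
For reference, a minimal sketch of that approach, assuming a hypothetical device whose RX thread put()s numpy arrays into a queue; only gr.sync_block and set_max_noutput_items() are standard GNU Radio API, the rest is illustrative:

import queue
import numpy as np
from gnuradio import gr

class device_source(gr.sync_block):
    """Source whose work() blocks until the device thread delivers samples."""
    def __init__(self, max_chunk=4096):
        gr.sync_block.__init__(self, name="device_source",
                               in_sig=None, out_sig=[np.float32])
        self.fifo = queue.Queue()   # the device RX thread put()s arrays here
        self._pending = None        # leftover samples from the previous call
        self.set_max_noutput_items(max_chunk)  # keep scheduler requests small

    def work(self, input_items, output_items):
        out = output_items[0]
        if self._pending is None:
            # Each block runs in its own scheduler thread, so blocking here
            # stalls only this block, not the rest of the flow graph.
            self._pending = self.fifo.get()
        n = min(len(self._pending), len(out))
        out[:n] = self._pending[:n]
        self._pending = self._pending[n:] if n < len(self._pending) else None
        return n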

Re: how to implement auto-correction of sample rate in flow graph?

Artem Pisarenko
After several days of hard work I figured out what was wrong, and I finally realized how this task should be solved properly. I generalized it so that it can solve similar problems for anyone connecting multiple hardware-clocked blocks in a single graph (aka "the two-clock problem", as it is called here: http://lists.gnu.org/archive/html/discuss-gnuradio/2010-05/msg00117.html).
My solution introduces an interface enhancement for sources and sinks and, as such, requires modifying existing blocks in the GNU Radio source tree. So I would like to get feedback from the maintainers before I start preparing my contribution.

The basic idea is simple (a sketch follows the list):
1. On each work() invocation, the source/sink estimates its current buffer level and buffer size and reports the values to the outside via the asynchronous message interface.
2. A special block inserted between source and sink receives the messages from both, joins the values into a single pseudo-buffer, filters out jitter, and implements a control loop: it calculates the deviation from 50% fill and corrects the sample stream by duplicating samples or removing samples, depending on the sign of the deviation.
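
A minimal sketch of what such a correction block could look like (the "sync_in" port name is from the proposal; the class name, smoothing coefficient and loop gain are illustrative, not an actual implementation):

import numpy as np
import pmt
from gnuradio import gr

class level_corrector(gr.basic_block):
    """Filters reported buffer levels and pads/drops samples accordingly."""
    def __init__(self, alpha=0.001, gain=1e-4):
        gr.basic_block.__init__(self, name="level_corrector",
                                in_sig=[np.float32], out_sig=[np.float32])
        self.message_port_register_in(pmt.intern("sync_in"))
        self.set_msg_handler(pmt.intern("sync_in"), self._on_level)
        self._alpha = alpha   # jitter-filter coefficient
        self._gain = gain     # control-loop gain
        self._level = 0.5     # filtered pseudo-buffer fill, 0.0 .. 1.0
        self._acc = 0.0       # accumulated correction, in samples

    def _on_level(self, msg):
        # single-pole IIR smooths scheduler/driver jitter out of the reports
        self._level += self._alpha * (pmt.to_double(msg) - self._level)

    def general_work(self, input_items, output_items):
        inp, out = input_items[0], output_items[0]
        n = min(len(inp) - 1, len(out) - 1)  # headroom for a +/-1 correction
        out[:n] = inp[:n]
        # deviation from 50% fill drives the correction accumulator
        self._acc += self._gain * (self._level - 0.5) * n
        consumed = produced = n
        if self._acc >= 1.0:      # pseudo-buffer too full: drop one sample
            consumed += 1
            self._acc -= 1.0
        elif self._acc <= -1.0:   # too empty: duplicate the last sample
            out[n] = inp[n - 1]
            produced += 1
            self._acc += 1.0
        self.consume(0, consumed)
        return produced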

Pros:
- It works automatically and doesn't depend on the clock skew value, the sample rate, or other resampling blocks in the graph.
- It doesn't depend on the (unknown) intermediate buffers in the OS or between blocks; they affect only stream start-up behavior, and that can be tuned via the block configuration. For example, my virtual machine adds an "invisible" ~250 ms audio buffer/latency (it can actually be seen from the large difference between the ALSA buffer size and the value returned by snd_pcm_delay()).
- The special block can be tuned (or implemented as several different blocks) to adapt to various clock-skew profiles, jitter profiles, etc.
- Unlike timestamp-based solutions, it is robust and flexible enough to support a variety of sources and sinks.

Cons:
- It distorts the signal heavily when the skew between the clocks is large.

I wonder why nobody has done this already.
Please comment.

Re: how to implement auto-correction of sample rate in flow graph?

Marcus Müller

Hi Artem,

your citing that thread from 2010 clearly shows you still didn't get the problem. The point being: if your source runs at a different speed than your sink, you'll need resampling in software. This has nothing to do with synchronization. I won't elaborate further on this.

On 13.12.2013 09:24, Artem Pisarenko wrote:
> I wonder why nobody did it already. Please, comment on.

Because GNU Radio is still a sample-based software radio framework. You don't want added or removed samples. As a GR user, you can safely assume that two samples are always 1/f_sample apart. That is what makes meaningful DSP possible. I don't see any usefulness in your proposal.

If there are two streams with different sampling clocks to be synchronized, chances are high that you want rational resampling rather than padding/dropping. The latter just wreaks havoc on every aspect of the signal. Don't do it unless you care more about the matching number of samples in the two streams than about the signal they represent.
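
For the fixed-ratio case the stock blocks already cover this. A minimal sketch (a hypothetical 48 kHz test tone played out at 44.1 kHz; 44100/48000 reduces to 147/160; assumes the GNU Radio 3.7-style Python API):

from gnuradio import gr, analog, filter, audio

class resample_fg(gr.top_block):
    def __init__(self):
        gr.top_block.__init__(self)
        # 48 kHz test source standing in for any fixed-rate stream
        src = analog.sig_source_f(48000, analog.GR_SIN_WAVE, 440, 0.3)
        # rational resampling by 147/160 turns 48000 S/s into 44100 S/s
        resamp = filter.rational_resampler_fff(interpolation=147,
                                               decimation=160)
        snk = audio.sink(44100)
        self.connect(src, resamp, snk)

if __name__ == '__main__':
    resample_fg().run()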

If your two sampling clocks diverge over time, well, that's a hardware problem that can hardly be solved in software once the error exceeds the size of the default buffers. With usable hardware, this should not happen.

Blocks shouldn't care about their own buffer "fill"; that's the job of the runtime, and GR does this fairly well, to be honest. Your virtual-machine audio problem is hardly proof of a shortcoming in this concept.

Greetings,

Marcus

Re: how to implement auto-correction of sample rate in flow graph?

Artem Pisarenko
Hi Marcus,

Marcus Müller wrote:
> your citing that thread from 2010 clearly shows you still didn't get
> the problem. The point being: if your source runs at a different speed
> than your sink, you'll need resampling in software. This has nothing
> to do with synchronization. I won't elaborate further on this.

I don't understand: what did I say wrong this time? Resampling is exactly what my proposal is. I know that the problem is in the hardware, and I know that the ideal solution is synchronization (again in hardware, of course). My proposal is just a cheap alternative for those cases where synchronization is not possible (for whatever reason).

Marcus Müller wrote:
> Blocks shouldn't care about their own buffer "fill"; that's the job of
> the runtime, and GR does this fairly well, to be honest. Your
> virtual-machine audio problem is hardly proof of a shortcoming in this
> concept.

You misunderstood me again. I was talking about hardware I/O buffers. I mentioned GR buffers only because they are part of the total buffer chain from the source hardware to the sink hardware and, as such, affect data propagation. My solution doesn't rely on them in any way.

My proposal just solves the classic problem where source and sink nominally have the same sample rate (e.g. 48000) but the actual rates vary (e.g. 47999 vs. 48001) due to temperature and drift. If your source and sink have totally different rates then, of course, you should use the rational resampler from GNU Radio's standard block collection, but you may additionally insert my block, which will finally make the speeds match precisely. (Of course, that's not strictly true, but I hope you understand what I mean.) The numbers below show the scale of the problem.
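
Back-of-the-envelope for that example, assuming (hypothetically) a 100 ms hardware buffer at 48 kHz that starts out half-full:

# sink consumes 2 samples/s more than the source delivers
f_src, f_snk = 47999.0, 48001.0
skew_hz = f_snk - f_src                 # 2.0
skew_ppm = skew_hz / 48000.0 * 1e6      # ~41.7 ppm
headroom = 0.5 * 0.100 * 48000          # 2400 samples of slack
print("%.1f ppm skew -> underrun after %.0f s"
      % (skew_ppm, headroom / skew_hz))
# prints: 41.7 ppm skew -> underrun after 1200 s

So even a nominally same-rate pair glitches every twenty minutes without correction.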
Why wouldn't I want to add/remove samples? Any resampler actually does that.
The final purpose of this solution is simply to eliminate buffer over/under-runs when streaming data between non-synchronized hardware. If you enjoy watching the disturbing 'aU'/'aO'/'uU'/'uO' output in the console (by the way, it goes to stderr, so it is treated as an error) and find my proposal useless, then I have nothing more to say. In my case there is also large latency, so I get a bonus on top of the console output: streaming is interrupted for a long time in order to re-buffer and then resume.

Re: how to implement auto-correction of sample rate in flow graph?

Sylvain Munaut-2
Hi,


> Blocks shouldn't care about their own buffer "fill"; that's the job
> of the runtime, and GR does this fairly well, to be honest. Your
> virtual-machine audio problem is hardly proof of a shortcoming in
> this concept.

I don't agree with this.

Blocks should be able to monitor the buffer level so they can act on it.

A real-world example:
 - an RF front end gets samples
 - a demod processes them into compressed audio frames
 - a codec converts the frames to audio
 - the audio goes to the sound card for playback

Now in this system you have two physically unrelated clock sources: the RF sampler and your audio card.
All the sample-rate relationships in the flow graph are adjusted for their nominal speeds, but of course they won't be in perfect sync.

Now if your audio card is a bit fast, it will run out of samples from time to time. It might automatically replay the last one, and you might not hear it.
But if your audio card is a bit slow, the buffer in front of it will slowly fill up and you'll get rising latency (and finally dropped samples when it fills up).

Now, a real-world vocoder I'm working with right now deals with this problem by having variable-length audio frames. When I get one frame of encoded audio, I can ask the codec to generate between 19.5 ms and 20.5 ms of audio (for a nominal 20 ms frame), which is of course much better than blindly dropping/repeating samples.

But to be able to use this, the codec block needs to be able to monitor the output buffer level so it can try to maintain it at a good level. (A sketch of that steering decision follows.)
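
A sketch of that steering decision, with hypothetical names and gain (this is not the actual vocoder API):

NOMINAL_MS, SPAN_MS = 20.0, 0.5   # codec accepts 20 +/- 0.5 ms per frame

def next_frame_ms(buffer_fill, gain=2.0):
    """Pick the next frame length from the output buffer fill (0.0 .. 1.0)."""
    # buffer too full -> ask for shorter frames; too empty -> longer frames
    correction = gain * (0.5 - buffer_fill) * SPAN_MS
    return NOMINAL_MS + max(-SPAN_MS, min(SPAN_MS, correction))

# e.g. a 70%-full output buffer: next_frame_ms(0.70) -> 19.8 ms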


Cheers,

     Sylvain


Re: how to implement auto-correction of sample rate in flow graph?

Marcus Müller

Hi Sylvain,

OK, I see the point here, and I agree that it would be nice if a block were notified of a "filling output buffer chain" in advance. (You could, however, use the noutput_items parameter or nitems_written() together with some system clock to get an idea of what is being consumed; a sketch follows.)
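
A rough sketch of that workaround: a pass-through block comparing nitems_written() against a monotonic system clock (the block and report interval are illustrative; as argued below, this is only meaningful as a long-term average):

import time
import numpy as np
from gnuradio import gr

class rate_probe(gr.sync_block):
    """Pass-through that estimates the average downstream consumption rate."""
    def __init__(self, nominal_rate):
        gr.sync_block.__init__(self, name="rate_probe",
                               in_sig=[np.float32], out_sig=[np.float32])
        self._nominal = nominal_rate
        self._t0 = None            # wall-clock time of the first work() call
        self._next_report = 10.0   # report period in seconds

    def work(self, input_items, output_items):
        output_items[0][:] = input_items[0]
        now = time.monotonic()
        if self._t0 is None:
            self._t0 = now
        elif now - self._t0 >= self._next_report:
            rate = self.nitems_written(0) / (now - self._t0)
            print("effective rate: %.1f S/s (nominal %d)"
                  % (rate, self._nominal))
            self._next_report += 10.0
        return len(output_items[0])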

However, you explained the problem very nicely: your hardware does something strange, e.g. repeating samples, when it runs too slow or too fast. The only way to actually account for that is to monitor the sink sampling clock relative to the source sampling clock. That, however, is something you can't do in software; you need some hardware feedback to give you a reasonably exact estimate of the frequency offset, since you don't (usually) see the output of your hardware sink. For different sound cards there might (or might not) exist some method to find out how fast a playback buffer got used up; this is very hardware-dependent and not easily unified in software, especially since timing offsets on the order of microseconds are relevant here, and if your feedback mechanism involves context switches, async messaging and computation, you won't get reliable results.

Basically: if your hardware is "bad" in the sense of drifting or wrong clocks, and you can't monitor your output, you'll be having a bad day. This is an inherent problem of SDR, I reckon, and from my point of view the only solutions are to constrain the hardware clocks of sinks and sources with a common master clock (which is what most SDR peripherals do, for that reason), or to monitor the sink output using the source clock (which is just another incarnation of controlling the sink clock with the source clock).
In software architectures like GR, which rely on "large" buffers so that normal operating systems can run the SDR, I guess you will have a hard time figuring out actual clock deviations just by measuring buffer fill; there is so much besides the sink clock going astray that can happen and make it look like latency is building up. Again: using GR, you usually try to ignore the fact that computation may (and will) take different amounts of time at different points in time, even for the same block, and you rely on the scheduler to call the connected blocks' work() functions with fitting parameters. Measuring the amount of data that passes through software buffers therefore won't help you much in measuring the sampling speed of a hardware sink, except on very large time scales.

Greetings,
Marcus

Re: how to implement auto-correction of sample rate in flow graph?

Artem Pisarenko
Hi Marcus,

Everything you said, I had already thought through during the last few days, and I finally came to the conclusion that the buffer-estimation model is the only one that is both reliable (it attacks the issue directly, by its nature) and universal (many drivers provide the required functions in their API). Since it involves a control loop (i.e. has feedback), it will compensate for all the factors you mentioned on large time scales. Of course, its parameters have to be tuned to the given system, at the very least to keep it stable.

It doesn't matter how many intermediate buffers exist between the hardware source endpoint and the hardware sink endpoint (including the GR scheduler's) or how the system handles them: all of them are linked into one chain. Ideally we would combine all of them into one large buffer and estimate its fill to get almost precise results (influenced by jitter only). In practice, the software APIs let us estimate only some of them, but this is not a problem. It just adds an unknown "buffer fall-through" offset at start-up (after the source block has pre-buffered to 50% and started streaming). The control loop will detect the sudden underrun and start injecting additional samples into the data stream. Eventually these "unknown" buffers fill up and the system reaches a stable state (the end of the start-up phase).

How long will that take? It depends on the sizes of these buffers and on the filter coefficient of the control-loop algorithm. It is also possible to tune it with user-defined parameters (the filter coefficient and a "buffer offset") to get the fastest start-up on a given system. The trade-off is sketched below.
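
Numerically, the trade-off behind that filter coefficient, assuming a single-pole smoother of the form level += alpha * (measured - level), as sketched earlier in the thread (the report rate is illustrative):

for alpha in (0.1, 0.01, 0.001):
    # a single-pole filter covers ~63% of a step after about 1/alpha updates
    n63 = 1.0 / alpha
    # at, say, 100 buffer-level reports per second:
    print("alpha=%g -> ~%.0f reports (~%.1f s) to settle"
          % (alpha, n63, n63 / 100.0))

Smaller alpha rejects more jitter but stretches the start-up phase proportionally.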

Sylvain Munaut-2 wrote:
> Now, a real-world vocoder I'm working with right now deals with this
> problem by having variable-length audio frames. When I get one frame
> of encoded audio, I can ask the codec to generate between 19.5 ms and
> 20.5 ms of audio (for a nominal 20 ms frame), which is of course much
> better than blindly dropping/repeating samples.

Hi,
I agree, but it's not critical right now. Since these corrections are expected to be infrequent, they won't distort the signal much. (In an audio signal they aren't even audible to the human ear.) I'm going to get the system working first, leaving such tuning for the very end.

Re: how to implement auto-correction of sample rate in flow graph?

Artem Pisarenko
I've completed my contribution. It modifies several GNU Radio in-tree packages.

Expectations:
As described here: http://gnuradio.4.n7.nabble.com/how-to-implement-auto-correction-of-sample-rate-in-flow-graph-tp45268p45337.html, with a difference in the correction method (see the result below).
- Sources and sinks get an interface enhancement: they publish their current buffer measurements (the relative buffer fill plus, possibly, other metadata) via an asynchronous message port "sync_out". (A sketch of this reporting side follows the list.)
- A special resampler block is added; it accepts the messages on its "sync_in" port and auto-corrects the resampler ratio. The block's configuration is well defined, and its operation is independent of the graph structure, the hardware, the data-stream dynamics and jitter, etc.
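
A sketch of the reporting side of that interface (the "sync_out" port name is from the description above; the block and the driver query are placeholders, not the actual patch):

import numpy as np
import pmt
from gnuradio import gr

class reporting_sink(gr.sync_block):
    """Sink that publishes its relative buffer fill on each work() call."""
    def __init__(self):
        gr.sync_block.__init__(self, name="reporting_sink",
                               in_sig=[np.float32], out_sig=None)
        self.message_port_register_out(pmt.intern("sync_out"))

    def work(self, input_items, output_items):
        n = len(input_items[0])
        # ... hand the n samples to the driver here ...
        fill = self._query_driver_fill()
        self.message_port_pub(pmt.intern("sync_out"), pmt.from_double(fill))
        return n

    def _query_driver_fill(self):
        # placeholder: a real ALSA sink would derive this from the buffer
        # size and snd_pcm_delay(), as discussed earlier in the thread
        return 0.5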

Result:
Due to gaps in my knowledge in some areas and to disappointing GNU Radio limitations, I wasn't able to do it properly and ended up with a limited implementation. I also used the fractional resampler block as the base of the design, so the correction is performed properly (instead of the "blind" adding/deleting of samples originally proposed). The implementation has poor performance characteristics and an obscure configuration (some parameters have to be determined experimentally), but it works.
I modified the audio sink block from gr-audio (ALSA implementation only) as an example of a hardware block implementing the required interface.

See the attached GRC graph for a usage example. It implements an audio loopback and therefore doesn't introduce a real sample-rate skew; it just shows how the mechanism works.

No criticism of the design will be accepted. (I asked for help beforehand; nobody advised me.)

Patch for GNU Radio tree
Example graph