Ticket #372 (new bug)

Opened 5 years ago

Last modified 4 years ago

Focusrite Saffire Pro 40 Midi out/playback issue

Reported by: gkhouri Assigned to:
Priority: major Milestone:
Component: devices/dice Version: FFADO 2.1.0
Keywords: Midi playback Cc: nils
The device the bug applies to: Focusrite Saffire Pro 40


Some Pro40 users trying to play back MIDI are having the same problem, described on the general forum: <http://www.ffado.org/?q=node/862#comment-12538>. The symptoms point to a synchronization problem in the data flow between the Pro40 and the MIDI hardware during playback. When sending MIDI data from the Pro40 to the MIDI device, some notes play, some don't, some sustain, and rhythmic data is lost. Recording MIDI data is fine - the notes show as the correct pitches and rhythmic values. Playback of the same MIDI data, recorded in Ardour 3.x using the Pro40, works fine on, e.g., an Edirol FS-66.

Change History

09/18/13 00:44:24 changed by gkhouri

  • device_name set to Focusrite Saffire Pro 40.

09/18/13 17:33:23 changed by jwoithe

  • owner changed.
  • component changed from devices/bebob to devices/dice.

My immediate reaction is that MIDI data is being supplied to the device too fast. We have seen similar symptoms for this reason with other interfaces.

What happens is that MIDI data is sent by the software much faster than a hardware MIDI interface can handle. If the interface does not buffer the MIDI data in order to slow the transmission rate then FFADO needs to do this itself. An example of this is in the MOTU driver, where a circular buffer is used to ensure that MIDI bytes are only sent to the audio interface at a pace that the interface can handle.

Since recording MIDI data works fine with the Pro40, we know that our understanding of the Pro40's underlying MIDI protocol is correct. This lends weight to the hypothesis that FFADO needs to send MIDI data to the Pro40 at a slower rate than it presently does. Determining that rate will require experimentation, but the MIDI line rate would be a reasonable first guess.
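As a rough sanity check on that first guess, the MIDI line rate works out as follows. This helper is purely illustrative (the names are mine, not from the FFADO tree) and assumes standard MIDI serial framing:

```cpp
#include <cassert>

// MIDI runs at 31250 bits/s; each byte is framed as 1 start bit,
// 8 data bits and 1 stop bit, i.e. 10 bits per byte on the wire.
constexpr int kMidiBitsPerSecond  = 31250;
constexpr int kMidiBitsPerByte    = 10;
constexpr int kMidiBytesPerSecond = kMidiBitsPerSecond / kMidiBitsPerByte; // 3125

// Audio frames that must elapse between successive MIDI bytes so the
// interface's MIDI out never exceeds the line rate (rounded up so we
// err on the slow side).
constexpr int framesPerMidiByte(int sampleRate) {
    return (sampleRate + kMidiBytesPerSecond - 1) / kMidiBytesPerSecond;
}
```

So at 48 kHz the transmit code must leave at least 16 audio frames between MIDI bytes, at 44.1 kHz at least 15.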

Implementing this is probably not difficult but will require physical access to an affected device. I am therefore not in a position to do this myself. A hardware MIDI device would also be useful. Perhaps Phil would be in a position to look into this.

Note that the Pro40 is a DICE device (not a BeBoB interface) so I've updated the component of this ticket accordingly.

(follow-up: ↓ 4 ) 09/18/13 18:01:25 changed by jwoithe

Further to comment:2, I've just seen some forum posts related to this subject and in particular a followup from Phil. So it seems he's not in a position to look into this right now. Hmm. I guess I could hack up something along the lines I put into the MOTU driver, but debugging it would be awkward since I don't have any Saffire devices. A better approach might be to try to create a suitable standalone buffering object which could then be used by interfaces which require it. I'll think about this.

The MIDI buffer I referred to was added to the MOTU driver in r990 with further refinements in r991. This could be used as a reference as to what might be needed in the more general case.

(in reply to: ↑ 3 ) 09/20/13 15:43:57 changed by gkhouri

Replying to jwoithe:

Let me know if I can help in testing.

03/21/14 07:40:02 changed by nils

  • cc set to nils.

I have two Saffire Pro40 interfaces and an e-piano, and I see the same problem. I think I grasp what is done in r990 and r991 with regard to MIDI buffering, but I also agree that this functionality should be moved out of device-specific code.

Is there a way I can help with this? I'm not overly familiar with how the ffado code is structured (I've only mucked around with ffado-mixer and some python code so far), but I can usually find my way around C/C++.

(follow-up: ↓ 8 ) 03/22/14 02:53:24 changed by jwoithe

As I implied in comment:4, the work required really needs to be done by someone with an affected device and the means to test any attempted remedy. I would be very grateful if you were able to take a look at the problem and come up with a possible solution. While a more generic solution is obviously the best option, I would not be opposed to something which simply made MIDI on the Pro40 work correctly. I would be more than happy to answer questions about the code and the way it's structured, although keep in mind that because I don't have any of the Saffire devices my familiarity with the DICE code is somewhat limited. Phil would be best placed to answer the trickier questions, assuming he has time to do so. To that end it may be best to post such questions to the ffado-devel list.

A final thing to keep in mind is the new in-kernel streaming driver for DICE devices that Takashi is working on. This seems to be maturing nicely and if you have the chance it might be worth testing his latest git repository to see how it performs in your situation. There are posts in ffado-devel in relation to this. If you wish to take a look at this development code and have difficulty locating it please let me know and I'll see what I can dig up.

03/22/14 04:20:06 changed by jwoithe

A correction to my comment:6. Takashi's kernel streaming driver work is for BeBoB and Fireworks devices, not DICE interfaces. So scratch that suggestion. For some reason I often interchange DICE and Fireworks; apologies for any confusion caused. Thanks to Jano Svitok for pointing out my error.

(in reply to: ↑ 6 ) 04/01/14 01:25:28 changed by gkhouri

Replying to jwoithe: I have a Saffire Pro 40 and I've done a lot of coding in a past life. I'd like to take you up on your offer to answer a few questions about the code structure. I don't have the time to completely understand the whole body of code and calls for libffado and the build, so here are some big-picture questions for starters. It looks like the Pro40 I/O is currently handled by the code in /libffado-2.1.0/src/libstreaming/amdtp/*. Does a new directory for the Pro40 need to be created in src/libstreaming as a copy of the amdtp files, with the files and classes renamed appropriately for the Pro40 (what might those names be?), similar to what you've done for the MOTU? Does scons automatically detect these and include them in the build? It looks like processWriteBlock() and encodeMidiPorts() would be the methods to start with. I couldn't easily find this info. I'm happy to email directly and/or post to the dev list. Thanks.

(follow-up: ↓ 10 ) 04/01/14 04:23:54 changed by jwoithe

gkhouri: thanks for your offer to look into this. I'll try to answer your questions.

The Saffire Pro40 is, as you might have guessed, a DICE-based device. The control code is in the SaffirePro40 object as defined in the saffire_pro40 files under src/dice/focusrite/. Being a DICE interface, the Pro40 uses AMDTP for streaming audio data to and from the device. The classes which implement the AMDTP streaming are in src/libstreaming/amdtp/ as you suspected.

The problem which needs addressing involves the speed of MIDI data and the fact that software can send MIDI data to the interface far faster than it can be sent on the MIDI bus (see also comment:2). I do not believe this is unique to the Pro40, since the situation will occur whenever software MIDI (with effectively unlimited speed) interfaces with a physical MIDI port limited to 31.25 kbps. For this reason any solution should probably not be implemented in a device-specific way. Instead, we need a way for the AMDTP layer to ensure that MIDI data is not sent to the firewire device too quickly.

I encountered a similar problem with the MOTU interfaces. While the streaming protocol had the capacity to send MIDI data at a fast rate, limited buffering in the MOTU caused MIDI bytes to be dropped. Although the average MIDI rate produced by the software was within the MIDI bandwidth, the burst rate for a particular message or group of messages could easily exceed the line bandwidth. To resolve this I implemented a simple ringbuffer between the MIDI data flow from software and that which was sent to the hardware. You can see this in the encodePortToMotuMidiEvents() method of the MotuTransmitStreamProcessor class. First up, any MIDI bytes present in the software MIDI data stream are stored at the head of the midibuffer buffer. If an overflow occurs the oldest bytes are dumped (this is not optimal because it will effectively corrupt MIDI messages, but it's easy and the buffer is sized large enough to make this unlikely). Following this, the data to be sent to the MOTU is constructed; a MIDI byte is copied from the tail of the midibuffer buffer into the firewire packet so long as sufficient time has elapsed since the last such action to maintain the required 31.25 kbps rate to the MOTU interface. The timing is handled by a simple countdown timer called midi_lock. The midi_tx_period value is calculated from the audio sampling rate, and is effectively the number of audio samples needed between successive MIDI bytes to achieve a 31.25 kbps rate.
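A stripped-down sketch of that scheme (the member names midibuffer, midi_lock and midi_tx_period follow the description above, but this is an illustration, not the actual r990/r991 code; a real-time driver would use a preallocated lock-free ring rather than std::deque):

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>

// Illustrative MIDI pacer: buffers bytes from the software MIDI stream
// and releases at most one byte per midi_tx_period audio frames.
class MidiPacer {
public:
    MidiPacer(int sample_rate, std::size_t capacity)
        : m_capacity(capacity),
          // 31250 bps / 10 bits per byte = 3125 MIDI bytes/s, so this many
          // audio frames must pass between bytes (rounded up).
          midi_tx_period((sample_rate + 3124) / 3125),
          midi_lock(0) {}

    // Bytes from the software MIDI stream go to the head of the buffer;
    // on overflow the oldest byte is dumped, as in the MOTU driver.
    void push(uint8_t byte) {
        if (midibuffer.size() == m_capacity)
            midibuffer.pop_front();
        midibuffer.push_back(byte);
    }

    // Called once per audio frame while filling the firewire packet.
    // Returns true and writes *out only when enough frames have elapsed
    // since the previous byte to stay within 31.25 kbps.
    bool nextByte(uint8_t *out) {
        if (midi_lock > 0) {
            --midi_lock;
            return false;
        }
        if (midibuffer.empty())
            return false;
        *out = midibuffer.front();
        midibuffer.pop_front();
        midi_lock = midi_tx_period - 1; // this frame counts as one
        return true;
    }

private:
    std::deque<uint8_t> midibuffer;
    std::size_t m_capacity;
    int midi_tx_period;
    int midi_lock;
};
```

At a 48 kHz sampling rate this releases a byte at most once every 16 frames, which is the pacing the text above describes.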

I imagine something similar could be arranged for the Pro40 and other AMDTP devices. While you could obviously proceed along the lines you suggested (defining a new class to handle data for the DICE devices generally or the Pro40 specifically, using amdtp-oxford as a template) I don't think this will be needed. What we require is some code similar to that outlined above which works to ensure that MIDI data isn't transmitted to the audio interface at too fast a rate by the AMDTP framework. This would form a part of the AmdtpTransmitStreamProcessor class. The encodeMidiPorts method of AmdtpTransmitStreamProcessor is possibly where this work needs to be done. Interestingly enough there is a comment in the code which notes that one possible reason for not being able to send a MIDI byte is due to rate limiting.

What is not clear to me right now is how the existing code implements rate limiting - whether there's some magic elsewhere which tries to keep the data flow within the port buffer within limits, whether there is an assumption that other software will do this, or something else entirely. In other words, it is not clear what exactly causes the high byte of the software MIDI stream buffer ("buffer") to be set to 0xff. A related question is what happens to MIDI data if the rate is too fast. I expect it gets dropped, which might be the reason for the problems people have noted.

My simplistic thought is that an implementation of a ringbuffer will be needed to perform a similar task in encodeMidiPorts to that which I do in the MOTU driver. That is, data from *buffer will be placed in the ringbuffer, and data from the ring buffer will only be used to populate *target_event if sufficient time has elapsed since the last MIDI data byte. However, this is just a guess based on a quick reading of the code. Furthermore, I don't have any AMDTP MIDI-enabled interfaces, so I can't test any of these theories.

As noted I don't think you'll have to define an entirely new class for this. I expect the work is equally applicable to all AMDTP devices which have MIDI functionality and therefore the base AMDTP class is where the new code should go. If it turns out you do need a unique class, I expect you would follow the lead of the libstreaming/amdtp-oxford streaming objects and subclass the AMDTP objects (in particular, AmdtpTransmitStreamProcessor in your case). Since the vast majority of the AMDTP functionality required is as implemented in the base AMDTP objects, duplicating all that code would be unnecessary. In this respect the approach to take would be closer to libstreaming/amdtp-oxford rather than libstreaming/motu.

For reference, scons will not automatically pick up new files or directories in the source tree. You need to tell scons about them in an applicable SConscript file. For the streaming objects (and most of the FFADO source) this is src/SConscript. In that file you'll notice definitions for "oxford_source", "motu_source" and so on which illustrate the basic idea behind inclusion of source files in the build. When the respective platform is enabled these are added to the "source" variable which ultimately causes them to be built.

I hope the above helps. Please let us know if you have further questions. For more in-depth discussions the ffado-devel mailing list might be more convenient than this ticket, but I'm happy to stick with the ticket if you find it more convenient.

(in reply to: ↑ 9 ) 04/02/14 00:01:16 changed by gkhouri

jwoithe: Thanks for your time and the details - your reply is very helpful. I had been thinking of changing encodeMidiPorts() in AmdtpTransmitStreamProcessor, at least for starters/for my purposes. When I get it working, perhaps someone can help determine whether the code is generic enough. I'll probably start using the mailing list after this. It'll be some time before I actually start coding and testing, as my performance schedule is pretty hectic at the moment.

04/02/14 05:03:08 changed by jwoithe

gkhouri: I'm looking forward to what you might come up with as time allows. Feel free to post patches to the ffado-devel mailing list asking for comments or feedback. If you check out the source code using subversion (svn) you can obtain a patch in the form of a "diff" output by running "svn diff" from the top-level FFADO source directory. This is a standard way to express the changes you have introduced to the source tree since checkout.

04/02/14 05:33:04 changed by nils

gkhouri: I also want to work on this as soon as time permits (I'll be on vacation from the weekend, for two weeks); perhaps we can team up then. I'll probably use git-svn for my local copy of the source code, which will allow me to carry forward (rebase) any changes when the ffado SVN repository gets updated. If you're familiar with git, we could put a (regularly rebased) feature branch for this on github or similar, until it's ready to be included in the official sources. I'm also fine with continuing this discussion on the ML.