MIDI export: the first track should not contain note events

• May 26, 2017 - 21:46
P2 - Medium
S5 - Suggestion
won't fix

MuseScore prepares Type 1 (multi-track) MIDI files incorrectly.

1) MuseScore assigns the note events of the first instrument (staff) to Track 0. This is wrong.

Track 0 is supposed to be a "conductor" track, which only includes the meta events: all meter and tempo changes. It is not supposed to include notes or other data. Instruments (staves) are supposed to be separated into tracks, starting with Track 1 (not Track 0).

2) MuseScore needs to separate voices into separate channels per track. In other words, voices on the same staff should be assigned to separate MIDI channels within the same track. Example: the bass clef staff of a grand staff should be Track 2, with its tenor and bass voices assigned to channels 1 and 2 respectively.

The points above can be verified in the MIDI file specification, section 2.2 and Appendix 2: http://www.cs.cmu.edu/~music/cmsip/readings/Standard-MIDI-file-format-u…
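To make the intended layout concrete, here is a minimal sketch (Python, standard library only) of a conforming Type 1 file as Appendix 2 describes it: Track 0 carries only tempo and meter meta events, and note events begin on Track 1. The specific byte values are illustrative, not taken verbatim from the spec's example.

```python
import struct

def vlq(n):
    """Encode an integer as a MIDI variable-length quantity."""
    out = bytearray([n & 0x7F])
    n >>= 7
    while n:
        out.insert(0, 0x80 | (n & 0x7F))
        n >>= 7
    return bytes(out)

def chunk(tag, data):
    """Wrap payload bytes in a chunk: 4-byte tag plus big-endian length."""
    return tag + struct.pack(">I", len(data)) + data

# Track 0: the "conductor" track -- meta events only, no notes.
conductor = (
    vlq(0) + b"\xFF\x51\x03\x07\xA1\x20"        # set tempo: 500000 us/quarter (120 BPM)
    + vlq(0) + b"\xFF\x58\x04\x04\x02\x18\x08"  # time signature: 4/4
    + vlq(0) + b"\xFF\x2F\x00"                  # end of track
)

# Track 1: the first instrument -- this is where note events belong.
notes = (
    vlq(0) + b"\x90\x3C\x40"      # note on, channel 1, middle C, velocity 64
    + vlq(480) + b"\x80\x3C\x40"  # note off one quarter note later
    + vlq(0) + b"\xFF\x2F\x00"    # end of track
)

# Header: format 1, 2 tracks, 480 ticks per quarter note.
smf = (chunk(b"MThd", struct.pack(">HHH", 1, 2, 480))
       + chunk(b"MTrk", conductor)
       + chunk(b"MTrk", notes))
```

A strict importer reading this file finds only meta events on Track 0, which matches the structure of the example file in Appendix 2.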

Thank you.


Can you be more specific about which language in those sections you are referring to? Reading it I don't see anything quite so specific as what you are saying. I'm not saying it isn't so, just that it isn't *obviously* so to me, and presumably wouldn't be to others. The fact that I don't recall anyone pointing this out before, and the fact that the MIDI files created by MuseScore are readable by most MIDI software, is what makes me doubt.

Hi Marc. The documentation could be clearer, but what I'm explaining is there. Appendix 2 is the clearest: see the example Type 1 file and notice that Track 0 has no note events. The reason why is not very clearly explained in section 2.2, which I also cited: Track 0 is supposed to be a "conductor" track (also called a "tempo map"), with no note events. You can also check MIDI files exported from any major music software to verify what I'm saying. The fact that nobody reported this doesn't mean anything. The files will still import fine into most applications, but not into those that are strict about the format. I'm a composer and I also write MIDI software. I discovered this problem when I tried to import a MIDI file produced by MuseScore into my software, MIDI Tapper ( http://hpi.zentral.zone/miditapper ). What MuseScore is doing right now is definitely wrong.

I should have mentioned that there is nothing in the document cited concerning the channel assignments. Ideally Musescore should let the user assign MIDI channels to voices (as all other notation software does). I couldn't find that feature in Musescore. Without allowing users to choose the MIDI channels, the application should at least separate voices by channel automatically on export. The reason is that duplicated notes (for example a soprano and alto voice that momentarily are given the same note) cannot be handled properly by the importer without separating the data by channel.

I also notice Musescore includes a Meta event [FF 21 01 pp] at the start of each track - the unofficial "MIDI Port Selection" event. Why is that there? Unless users can assign MIDI output ports to tracks in Musescore, there is no reason to include this message in MIDI file exports. As it's not part of the MIDI spec, most software doesn't support this message anyway.
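As an illustration of what dropping this event would involve on the consuming side, here is a hypothetical sketch; it assumes a track has already been parsed into (delta-time, event-bytes) pairs, which is an invented representation for this example, not MuseScore's internal model.

```python
def strip_port_events(events):
    """Remove the unofficial MIDI Port meta event (FF 21 ...) from a track.

    `events` is a list of (delta_time, event_bytes) pairs.  The delta time
    of each removed event is folded into the following event, so the
    overall timing of the track is preserved.
    """
    out, carry = [], 0
    for delta, data in events:
        if data[:2] == b"\xFF\x21":
            carry += delta          # keep the elapsed time, drop the event
        else:
            out.append((carry + delta, data))
            carry = 0
    return out
```

For example, a track of [port-select, note-on 10 ticks later, port-select 5 ticks later, note-off 7 ticks later] reduces to the note-on at delta 10 and the note-off at delta 12.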

Should I be filing each of these observations separately? I've not done that so far because I'm assuming whoever will fix MIDI file exporting will want to take care of all this at once. Thanks.

I think it best to keep separate issues separate.

Regarding (1), having Track 0 be the "conductor" track with no note events: I have heard about this from other people too. I vaguely remember a forum discussion about a MuseScore MIDI export that had problems loading into Synthesia (I think) because MuseScore did not adhere to this "standard". I'm no MIDI expert, and to be honest I'm not sure whether that is an official MIDI standard or just a de facto one. I can't seem to find that issue, nor have I been able to find a definitive MIDI file format standard, although I do find a lot of material online saying that a Type 1 file must have no note events in Track 0. But maybe it is a good idea to adhere to this "standard" regardless of whether it is official, so we don't get complaints from programs that do adhere to it.

Regarding (2), separating voices into separate channels per track: I'm not sure this is the most desirable, since voices are nothing more than separate voices. They aren't separate instruments, and they might not even be for separate musicians (for instance, the voices of a piano part), so I would lean toward not putting voices on separate channels. Having each voice on a separate channel might just complicate things and make it harder for the ordinary 99% use case where the user wants all voices of the same part synthesized the same. Again, I don't know of any definitive official standard; this is just my personal opinion.

The link I provided is a transcription of the official MIDI 1.0 Standard, which I also have as a PDF. I just checked it against the web link, again. The text and examples are the same. There are no note events on Track 0. It's part of the MIDI file standard, MuseScore does it wrong, and it should be fixed.

Regarding the other issues, they are not that important; better ignore them in this thread. I can't seem to edit the issue description at this point to get rid of the other info, but I'll submit other reports and separate my reports from now on. Thank you.

I added this to the MIDI Tapper documentation under the heading "Possible Import Parsing Issues":
"Type 1 MIDI files improperly containing note data on Track 0 (for example, .mid files created with MuseScore 2.1 or earlier) prompt an alert and are corrected on import."

I would have already fixed this issue in MuseScore myself (since it is an extremely simple issue that would take a few minutes to fix), but I've never participated in an open source project, don't use Git, have no idea how it all works, etc. If I'm able to learn how to participate, obviously I'll stop bothering you and just fix whatever problems I find myself. Thanks.

Title changed: ".mid files do not conform to the MIDI standard: 2 problems" → ".mid files do not conform to the MIDI standard"

Thanks for the link, I'll see if I can figure it out.

I'm now on GitHub, and have forked MuseScore. I don't know who changed it, but I'd like to keep the priority of this issue "Major" the way I submitted it, since it fits the description "Common feature incorrectly/not functioning".

Status changed: needs info → active

I see, it looks like anyone can change these things whenever a comment to the issue is made?


I would assign this to myself, but I'm only beginning the process of learning how to contribute, which appears to be a little more involved than I was led to believe. (So far, what was supposed to take 15 minutes took about an hour.)

Separate channels for staff voices would be incorrect for all keyboard instruments (not just piano), or instruments with one sounding element per note (e.g., harp and classical guitar, as noted in other threads). Separate channels for voices would be very useful when "flute 1 and flute 2 are on one staff". So that has to be an option. Even this, though, will not solve the extremely common case of short/long note coincidence on keyboard instruments, classical guitar, etc. It is not a matter of convenience or formal correctness, but sounding two channels at coincidence is simply incorrect; real pianos, harpsichords, and organs do not do that (although sometimes organ music intended to be rendered on two manuals, i.e., keyboards of the same instrument, is notated with independent voices on one staff, which is the "two flutes" case).

It is to be noted that mechanical-action (tracker) organs from hundreds of years before midi implement a policy on this -- when one manual is coupled to another, or the pedal coupled to a manual, the actual keys of the target depress when the source keys are depressed; that is, coincident shorter notes on the target are not re-sounded, but "silenced"/ignored.

"Separate channels for staff voices" -- this should really be moved to the other thread, but since you posted here, I'll respond here.

Yes, channel assignments should be user-controlled, and the default behaviour should be as it's working now, that is, all voices on the same staff default to the same MIDI channel.

But the user should be able to select the MIDI channel assigned to each voice.

"sounding two channels at coincidence is simply incorrect". No. Although it may be non-intuitive, "sounding two channels at coincidence" IS absolutely correct according to the MIDI file standard. I've already explained why this is so, and if you don't believe me, please read the MIDI spec. Separate voices in polyphony cannot be tracked properly in MIDI output unless they are assigned to separate channels.

Then real pianos, organs (voices on a single manual) and harpsichords "do not track polyphony properly". Real pianos do not sound twice as loud when one voice crosses another. Midi explicitly allows for more than one note to be sounding in a channel at the same time. This is precisely the way performance on keyboard instruments is represented, as well as what the result of transcribing a performance on a midi keyboard looks like, as can be verified with any midi-recording instrument or software. Midi keyboards create output for one channel, not one per note. MuseScore does not manage coincidence within a channel properly; it is not about failure to use multiple channels.

There is a "keyboard model" and a "multiple instruments model" of what multiple voices on a staff mean, predating computation and midi, and MuseScore ought treat them differently. One midi channel is "one keyboard-model instrument". Right now MuseScore is "always keyboard model", but note coincidence within that model is not now handled properly.

Right, acoustic instruments don't follow this convention, of course. But this is not about real acoustic instruments. It's about MIDI output.

All commercial music notation software allows the user to assign MIDI channels to voices. MuseScore should do this too, and when it does, this problem of one voice cutting off another will vanish.

Consider that Bach wrote "Die Kunst der Fuge" as an open score, voices/instruments unspecified, but most modern editions of this work are written on a keyboard grand staff. Try the following:

1) Notate Bach's original in MuseScore. It will play back correctly. Output a MIDI file. It plays back correctly in other software. Why? Because the MIDI output is correctly separated into different channels.

2) Next notate the keyboard-score version of the Art of Fugue in MuseScore; it will play back incorrectly. Output a MIDI file and open that in other software. It will also play back incorrectly there, because the MIDI data is incorrect.

In other software, the user can assign separate MIDI channels to the voices to solve this problem, and then it plays back correctly. I've been writing contrapuntal keyboard music for almost 30 years. I can't imagine not being able to assign MIDI channels to voices in notation software. It's an extremely basic necessary function.

I've been writing contrapuntal music for over 40 years (check out my profile). I work with midi keyboards (virtual pipe organ, and I write MIDI software for it) every day.

I agree completely that the ability to assign voices on a staff to separate channels is a fundamental one that MuseScore currently lacks, and one that would solve many problems and open many possibilities not now available. But it would not solve the single-instrument note coincidence problem. Please reread what I wrote about the two possible (and both necessary) models.

Were your Kunst der Fuge example to be rendered with separate "piano" channels for each voice on the grand staff, it would not play correctly, that is, note coincidences would sound like "two pianos". That is absolutely incorrect. The way the texture should sound on one piano is different than the way it would sound on four separate instruments, or four pianos each playing one voice. Single-keyboard reduction of polyphony involves notes occasionally disappearing, and this was true 300 years ago, 400, as well as now.

Midi is NOT a representation of a score, but a schedule for what notes are to be rendered upon the instrument(s). Channels represent instruments, not logical voices of polyphony, and in the case at hand, these two notions differ.

There is no reason why it should be possible to retrieve or reconstruct polyphony from a midi keyboard score. That is not its purpose. Its purpose is to produce the same sounds on a midi-controlled keyboard instrument as would a human performer.

Arguing about the use of channels in this way being correct or not is a very silly argument to have. The point is about correct MIDI output from MuseScore.

"I've been writing contrapuntal music for over 40 years."

I enjoyed listening to your very fine work "Piece d'Orgue" at https://musescore.com/user/1831606/scores/3889661

"But it would not solve the single-instrument note coincidence problem. Please reread what I wrote about the two possible (and both necessary) models."

I already read it. Look up a handful of Bach MIDI files and you'll find that multiple MIDI channels are used to represent the polyphony in Bach's keyboard music. Otherwise we would not be able to have correct MIDI files of contrapuntal keyboard music. Yes, this DOES result in unnatural doubling of notes when the playback instrument doesn't have some mechanism for handling the doubled notes, as most samplers don't, but some players do handle this correctly. For example pianoteq handles it properly (it's not a sampler). Kontakt player also handles it so that notes don't "stick out", (although its handling has other problems). Anyway, yes it is unnatural and does not correspond to the way acoustic instruments work, but it's necessary in order to correctly play back polyphonic music written on one staff. In my own software MIDI Tapper, I include an option users can check "mute doubled notes", exactly for this purpose, so that the doubled notes don't "stick out". But without channel separation, the MIDI data is not trackable, and that is the point for MuseScore.

Being "trackable" is not a goal of the "musical instrument digital interface". That is the purpose of Music xml, msc(z), and other score-representation formats. Nor is midi a reasonable technique for reducing a four-staff score to a single-instrument performance.

What do you think a midi-controllable pipe-organ (most new ones with electric action are) should do when confronted with your score in which voice-crossings on the same manual are given in two channels? Sprout new pipes in real time?

Polyphonic voices are "trackable" in MuseScore, and every other score editor, but not in keyboard midi files or midi recordings of keyboard performances. Saying "Bach Midi files" is deceptive: Staves don't exist in MIDI. Midi files in which polyphonic voices are separate channels, as in orchestral or choral score/performance, must exist as well as ones on which they are on one channel, as on a keyboard. There is no "correct" choice here for both cases -- one is for keyboards, and another for separate instruments.

"Mute doubled notes" is the whole deal. MuseScore ought have that as a per-staff checkbox, and that is all that is needed to solve the problem. The option of separate channels/instruments for each voice on a staff is also extremely desirable, but solves a different problem.

Thank you for your kind remarks on my recent composition (do check out older ones) -- you will note, as it were, that the last top-voice note is clobbered by a shorter note in the next voice as posted (and this is on Hauptwerk driven by (processed) MuseScore midi) (I haven't corrected it yet).

BSG, you've missed the point.

I don't know if you're also a computer programmer, but I am, and I write music software that deals with MIDI files, so I know what I'm talking about. What I've reported here, and what I've also tried to explain to you, is that the MIDI data output from MuseScore, in MIDI files will, in these cases involving doublings and notes of different lengths overlapping, result in MIDI data that *cannot be properly parsed* by any software importing that MIDI file. THAT is what I was reporting, and that is the problem. This is also demonstrably a problem in real time MIDI output from MuseScore to its internal synth, which will be solved once channel assignments are optional for the user. All other arguments about what MIDI channels are supposed to be used for, etc. are irrelevant.

Yes, I've been programming computers for 50+ years (40+ professionally), details of my career not relevant here, but writing MIDI-processing software (for myself, in Python these days) is about all I do since I acquired Hauptwerk a year ago, so I, too, know what I'm talking about.

"Importing MIDI files" is a kludge for reconstructing scores (and believe me, I've done it enough times) when you don't want to type them in over again. It is no substitute for a score representation. As I just added to the previous letter, the note coincidence problem troubles not only the internal synth, but other midi instruments (e.g., Hauptwerk) driven by MS midi output.

I think the goal of using MIDI as an exchange format in this day and age is misguided, and there is no reason why the representation of a keyboard reduction of polyphonic music should be optimized for this. Midi keyboards (and divisions of virtual pipe organs) listen to ONE channel, like all other instruments, not many. A keyboard-reduced score cannot at once serve the two goals of driving one keyboard instrument and comprising a "trackable" source for reconstruction of polyphony.

(if you missed my update to my previous posting, thanks for your kind remarks about my work, and note that the very last note of that composition has been hit by this problem, rendering on Hauptwerk).

Okay, I understand where you are coming from, but the goal is not MIDI as an "exchange format", although it does have to do with MIDI as a representation of a score.

Some of your argument has to do with the different ways MIDI is employed with regards to hardware versus software. I also design MIDI hardware, so I'm aware of these differences. On one hand you can have a MIDI-fied organ, where a manual is represented by one MIDI channel. On the other hand you can have a contrapuntal composition for organ where voices are assigned to separate channels. They are different paradigms for different purposes. Arguing for a hardware model in software doesn't make any sense ...

However, since you are working with Hauptwerk, then I can also see why you go in the direction of the hardware model, since Hauptwerk sticks to the MIDI organ hardware model, because they are primarily aimed at users who want to play a virtual pipe organ in real time. The needs are different.

For polyphonic composition with notation software, which is what is at issue here concerning MuseScore, the needs are as described in my initial post. What I've reported as a problem is exactly that, and I've explained the solution. (The N.B. would be that software like Hauptwerk wouldn't like the solution, since it uses a hardware input model, but that's no reason not to fix the problem in MuseScore.)

MIDI stands for "musical instrument digital interface": i.e., it is a "hardware model". "Representation of a score" to me means "exchange medium". MIDI files are designed to be performed, not imported. As a result, notes are shortened or otherwise articulated by their authors, enharmonic distinctions are completely discarded (as you have surely noticed), in all cases I know (despite the MIDI standard) lyrics and textual annotations and indications are lost, etc. It is a performance medium, not a score medium. The correct solution to the clipping problem is "mute duplications in same channel", as you yourself have implemented in your own software (and other systems I have heard of offer), not requiring authors of polyphonic keyboard music to direct their voices to distinct channels.

I urge you to look at the (wonderful) G# minor prelude of book II of the Well-Tempered Clavier, first few measures. Here is an indisputably polyphonic composition for keyboard (at least that's what the title of the collection alleges), which is not a fugue, and not a reduction from a multi-staff original. Can you tell me how many voices there are, which notes belong to which voice, and which channels they ought be assigned to, or why it is reasonable that someone ought be forced to make that determination in order to deal with this work in MuseScore? What do these chords and third/sixth doublings mean in your "one channel per note" model? Where lies the boundary between "polyphonic keyboard music" and "other keyboard music" in your model, and how can others discern it?

MIDI was invented as a hardware standard, before music software was a thing, sure. And uses for MIDI have developed over the decades, including import / export standards and expected norms for MIDI file exchange. You have an idea of how a MIDI file should be used - one idea among many; it's not the only way.

MIDI files can include lyrics and other text in meta events. Files made for karaoke machines do that, for example. Text is not necessarily lost. Other data can also be included. In fact, the standard allows for arbitrary bytes for whatever purpose the user wants to be stored in MIDI files.

There are two possible cases for doubled notes:
(1) same channel - notes get "cut off", MIDI files do not parse correctly
(2) different channels - notes "stick out", MIDI files parse correctly

We're talking about case (1) as the problem in MuseScore, since all notes on a staff are always on the same channel and that can't be changed by the user.

As a solution, you propose that all the data should remain on one channel. Okay, that's an interesting idea, and I don't know of any software that tries to handle this problem in that way. It would require altering the MIDI data by removing some MIDI events and possibly adding others. Not all cases are the same either; sometimes one voice "drops in on" or crosses a note held by another voice, sometimes they begin together, etc. I suspect that there are too many cases that could not all be handled properly, so the result would be error-prone. If you want to implement it, go for it, but I don't recommend it. If MuseScore did this (implemented a note-muting function for voices on the same channel), that would be a first in music notation software as far as I know.

Issue (1) above is a problem in all music notation and sequencing software that I have ever worked with. The solution to this problem in all music software I know of is to allow the user to assign channels to each voice. All I'm proposing is that MuseScore implement the solution to this problem that is implemented in all other standard MIDI software. Obviously assigning multiple channels to voices isn't desirable for all use cases, so it should be a user option as it is in all the other software.

This standard solution also of course leads to problem (2), which is better to have than problem (1), because at least with (2) MIDI files can be parsed correctly. And handling case (2) is dead simple. This is what MIDI Tapper does, simply by sending the same MIDI data but reducing the velocity value of all "muted" notes to 1. MIDI Tapper has its own function for finding which notes should be muted, and the user can also arbitrarily select notes to mute or un-mute. MuseScore already allows users to set the velocity value of any note arbitrarily. That means that once the standard solution is implemented, the problem is solved for the majority of use cases.
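The velocity-1 workaround described above can be sketched as follows. This is a simplified model (the note representation is invented for illustration, not MIDI Tapper's actual code): when the same pitch is struck at the same time on two channels, all but one of the duplicates is dropped to velocity 1 so it doesn't "stick out".

```python
def mute_doubled(notes):
    """Mute doubled notes across channels.

    `notes` is a list of dicts with keys "start", "channel", "pitch",
    "velocity".  For any pitch struck simultaneously on more than one
    channel, every duplicate after the first (in channel order) has its
    velocity reduced to 1, which is effectively inaudible on most patches.
    """
    seen = set()
    for n in sorted(notes, key=lambda n: (n["start"], n["channel"])):
        key = (n["start"], n["pitch"])
        if key in seen:
            n["velocity"] = 1   # the doubled note no longer "sticks out"
        else:
            seen.add(key)
    return notes
```

So a soprano and alto voice on channels 1 and 2 that momentarily land on the same note keep their separate channel data intact (the file still parses correctly), while only one of the two coincident attacks sounds at full velocity.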

Staves routing to one channel and staves routing voices to separate channels (latter not currently available in MuseScore) are two different cases with different sets of use-cases and different problems. One is not the solution to the other.

Staves routing to one channel, which is how single instruments (including keyboards) are properly represented, demand special care on the part of score-editing programs, which, without taking such care, will generate (on)(on)(off)(off) for the same note. This is not a problem in the definition of MIDI, or in instruments, or in the use of one channel. It is a problem of score-editing programs not knowing how to play polyphonic music on a keyboard the way men and women have for five hundred years. The sequence of on's and off's sent to an instrument should represent when you want the notes turned on and off, not running adjustments to their instantaneous "polyphonic depth".

Staves routing individual voices to different channels would be a very wonderful MuseScore feature, and you say that other apps have it. The "double sounding" of a coincidence is not an error, or a problem, but exactly what you want if you have Flute I and Flute II (orchestral instruments) on the same staff, or a hymn-book staff with two parts on a staff, or even multiple violin-strings. This is exactly the correct and desirable behavior. But if you attempt to use this to render keyboard music, you will indeed create problems. One instrument per channel is the midi standard, isn't it? Staves routing individual voices to different channels is a necessary facility to represent multiple instruments (including multiple orchestral "sections") on the same staff. It is not a substitute for a midi generator knowing how to play polyphonic music on a keyboard instrument.

Saying "MIDI files parse correctly" is not objective language; it means "I can more easily recover the score from the MIDI file."

I have encountered the overlapped on/off problem myself. See mm. 33-34 of https://musescore.com/user/1831606/scores/1608616 . The correct solution is MuseScore learning to play such passages on a keyboard the way humans do.

(And while we're here, one staff routing ALL its notes to many channels, i.e., colla parte, is something I have wanted for years).

I have listened to a bit of your music in SoundCloud, by the way, and it is quite wonderful.

I would like to hear, as it were, some other voices here ...

Thanks for listening to some of my music, and I appreciate the nice compliment.

> "MIDI files parse correctly" is not objective language;
> it means "I can more easily recover the score from the MIDI file."

No, it means there are no weird, unpredictable results when parsing the file. A Note OFF message at x is expected to follow a Note ON message at x. Because of this, in a properly formed MIDI file, one can look at the data back to front in order to connect each OFF with its corresponding ON. When there are duplicated notes of different durations on the same channel, it is *impossible* to do this, to correctly connect OFF with ON, and you can end up getting very strange results, like notes held through the entire piece, notes that have a 1 ms duration, and so forth. The problem is sometimes not apparent to the listener, if the playback patch is not a sustaining sound, but in a graphic editor the parsing problems are visible plain as day.
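The ambiguity can be demonstrated with a small sketch: given the merged on/off stream of two same-pitch voices on one channel, no pairing rule can recover the durations either voice actually played. The event representation here is invented for illustration.

```python
def pair_notes(events):
    """Pair note-offs with note-ons on a single channel.

    `events` is a time-ordered list of (time, kind, pitch) with kind
    "on" or "off".  Each "off" is paired with the earliest unmatched
    "on" of the same pitch (FIFO).  Returns (start, end, pitch) triples.
    """
    open_notes, out = {}, []
    for t, kind, pitch in events:
        if kind == "on":
            open_notes.setdefault(pitch, []).append(t)
        elif open_notes.get(pitch):
            out.append((open_notes[pitch].pop(0), t, pitch))
    return out

# Voice 1 holds middle C from 0 to 4; voice 2 strikes the same C from
# 1 to 2.  Merged onto one channel the stream is on(0) on(1) off(2) off(4).
# FIFO pairing yields (0,2) and (1,4); LIFO pairing would yield (1,2) and
# (0,4).  Neither rule reproduces what both voices actually played -- the
# data is genuinely ambiguous once the notes share a channel.
```

On separate channels the same two voices pair unambiguously, which is the sense in which channel-separated files "parse correctly".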

Yes, I agree with this; sorry if I misunderstood your intent. Again, the missing element in the current stack is "keyboard technique for playing polyphonic music", 500-year old info, absent because the (to-)midi renderer sadly does not realize when it puts two digital fingers, as it were, on the same key as people do; it is not implicit in the (correct) use of one channel per keyboard or keyboard instrument.

But if you do what I say, MIDI files will "parse correctly" in this respect, but still will not produce the original score easily, just as with careful recording of the keyboard actions of a human performer.

Title changed: ".mid files do not conform to the MIDI standard" → "In MIDI export, first track should not contain note event"

I changed the title so that fixing 1) is enough to mark this issue as fixed. 2) is totally different and should be addressed in another issue. Please try to create issues with a single focus.

Title changed: "In MIDI export, first track should not contain note event" → "MIDI export: Track 0 should not contain note events"

(I changed the title to make it clearer)

"2) is totally different and should be address in another issue. Please try to create issue with a single focus."

I already did create another thread for the second issue here, but someone marked it as a duplicate, even though my post gives a solution and none of the previous posts include the solution:


The solution to problem (2) is also given above.

The way you worded the other issue, it *is* a duplicate. The issue is that short notes cut off long ones on the same channel. There are multiple possible solutions. The original report focused on a different solution than the one you favor. Feel free to respond there in that original issue. But as I keep saying, it really seems like this needs to be discussed with users on the forum first.

"There are multiple possible solutions. The original report focused on a different solution than the one you favor. Feel free to respond there in that original issue. But as I keep saying, it really seems like this needs to be discussed with users on the forum first."

Marc, I'll post the following to the other thread, but let me outline it for you here also one last time (please humour me) because I honestly can't imagine why anyone would say this needs to be discussed. This is a typical rookie MIDI software problem, and there is a widely known industry standard MIDI software solution for the problem. To summarise:

MuseScore cuts off duplicated notes in different voices on the same staff. Why? Because all the notes are assigned to the same MIDI channel.

Allow users to assign MIDI channels to different voices.

There absolutely are not "multiple possible solutions" to this. Not viable solutions, anyway. There is already an industry-wide standard solution to this problem. All other music notation software supports this solution. They all allow the user to assign MIDI channels to voices regardless of where they appear on any given staff. MuseScore doesn't have this feature, and *the behaviour above is a direct result of lack of this feature*.

Yes, doubled notes from different channels _may_ "stick out" (it depends on the playback device), but that is a very well known issue to anyone who works with MIDI software, which also has a very well-known user workaround: simply change the velocity value of one of the doubled notes to 1. Problem solved. More verbiage does not need to be wasted on this issue. The solution simply needs to be implemented.

So, exactly as I said, there are multiple solutions. And some of the others *are* perfectly viable. And ideally, I think we should design a solution that both *allows* a user the *option* of having different channels for different voices if that suits his use case, but *also* gives the correct playback if he elects to use the same channel for different voices if that suits his use case better.

It is clear which use case you personally encounter, and thus which part of the solution you are focused on. No need to insult the people concerned with other aspects of this problem. I realize you are mostly accustomed to working solo on small projects where you get to determine the use cases you want to support as well as implement the solutions. But in a larger open source project like this - one with dozens of developers and a user base in the *millions*, we need to seek user input to understand the different use cases that we may need to support - often these are very different from our own - and to respect the opinions of our fellow developers and engage in meaningful dialog with them.

*As for "rookie MIDI software problem", let us know if you really want to compare our collective resumes to see which solution should "win". Personally, though, I'd rather actually discuss the technical issues objectively.

Marc, with respect, your responses have been oppositional from the start, wasting time and energy. In each of my posts I have clearly stated problems and given solutions. Your first response to my bug report was to challenge my understanding of the MIDI standard, which only showed that you in fact do not know the MIDI standard. It's absurd. The end of the discussion on this thread is: hey, this guy was right, and we need to do exactly what he said. Duh. That's why I said it in the first place, and if you listened instead of challenging without knowledge, you'd reach the solution a lot faster. To repeat, when a problem identified by an informed and experienced MIDI programmer is brought to your attention by that person, you should listen instead of immediately reacting in opposition.

*As for "rookie MIDI software problem", let us know if you really want to compare our collective resumes to see which solution should "win". Personally, though, I'd rather actually discuss the technical issues objectively."

That is complete nonsense, and reveals nothing but your ego and small-mindedness. I have been talking about MIDI and how it is supposed to work in music notation software. I am not interested in your resume.

This is the very reason I have avoided these open source projects like the plague. Someone reports a problem, gives the solution. Show respect and admit when a basic tenet of the MIDI standard has been overlooked, such as no notes on Track 0 in a Type 1 MIDI file, or assigning channels to voices in order that MuseScore can create properly formed MIDI file output. These are extremely basic MIDI issues, hence the "rookie" comment. You can take it personally if you like, whine and complain and waste a lot of time, or you can read the MIDI spec and see that what I said to begin with was correct.

Asking questions for clarification is not being "oppositional". I am sorry that it seemed to upset you so much though. If you would like to objectively discuss the actual technical issues, please do so, but I ask you leave out the personal insults.

Marc, there has been absolutely no personal insult coming from me towards anyone here. You are the one suggesting that we "compare our collective resumes", which is unnecessary and absurd, and shows you are not focused on the problem and how to solve it. The problem and its solution were stated in my original post. That could have been the end of it, but no, there has to be this barrage of nonsense and wasted energy that follows, because it's the internet.

@.function_ as someone not involved with any of the discussions you have been involved in, I've got to say the rudeness you have introduced yourself with in the MuseScore community has been painful to watch ever since you reported your problems with building the programme.

RobFog, what's painful to watch is a lot of programmers unable to accept things like

- the program does something fundamentally incorrect here; here's what it needs to do, thank you
- the instructions on this webpage are wrong and out of date, please fix it

without pointlessly arguing and wasting a lot of time and energy.

Pointlessness is annoying. Your remark criticising my demeanour adds nothing to this discussion towards solving problems. The problems were stated clearly, with solutions, and should have been verified and fixed on the initial post, not followed by pointless blathering. Read my initial posts. Are they rude? Absolutely not. They are exactly to the point and spot-on about problems and solutions. What follows is pretty ridiculous.

The use of the passive voice ("the problems were stated with solutions") is irresponsible here. You stated the problems, and you proposed possible solutions. There is no agreement that your solutions solve the problems at hand, or are reasonable solutions. Disagreement, dispute and critique are not "pointless blathering". In fields other than mathematical proof (in particular, engineering), unilateral assertion of correctness comprises neither correctness nor consensus. There are plenty of people here who have done MIDI programming. You also say you are new to MuseScore ....

BSG, I provided obvious industry-wide-proven solutions to obvious well-known problems, and presented them on a silver platter as it were. I appealed immediately to MIDI standards, with a link. Experienced MIDI programmers should know these things already. They should be aware of the standards written in the documents and de facto resulting from industry-wide implementation. As I already said, go ahead and write the code that proves whatever it is you want to prove; going on like you are without anything to back it up is nothing but smoke and mirrors. I'm done with this thread. The moderator should mark it as spam at this point. The problem and solution stand in the initial post. I ask kindly that someone please take up the problem and fix it. Unfortunately that person won't be me because the instructions provided on the MuseScore website for developers are out of date and do not work (and that is simply a fact and not an insult) and at this point I just do not have time to try to figure it all out. I have my own software to maintain and updates to issue. I'm very grateful to everyone who works on MuseScore; it's a wonderful piece of software with vast potential. Thank you and goodnight.

Which part of the instructions are out of date? There might be some old instructions still accessible, however, if you find the latest compile instructions and follow them to the letter, then it should work. If not, then maybe let us know exactly what is wrong with the instructions so they can be fixed. (it is very easy to make a slight mistake that will cause compile to fail)

See https://musescore.org/en/node/208621. As far as I can tell the issue has to do with attempting to build 2.x branch on older / unsupported versions of macOS. Not really relevant here of course since all new development should be on 3.0, but probably trying to build *any* version of MuseScore on unsupported version of macOS will be problematic.

You need to implement refcounting (and then emitting only the very first start and the very last stop event for multiple notes) to fix the playback problem in the simple keyboard model.

This should be not too hard to do, perhaps even performantly.
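A minimal sketch of what that refcounting could look like, assuming events as (tick, kind, pitch) tuples sorted by tick; this is a hypothetical representation, not MuseScore's actual event model:

```python
# Sketch of the suggested refcounting fix: when several voices on the
# same channel sound the same pitch, emit a note-on only for the very
# first start and a note-off only for the very last stop. The event
# tuples are a hypothetical representation, not MuseScore's internals.

def refcount_events(events):
    """events: list of (tick, kind, pitch) with kind 'on' or 'off',
    sorted by tick. Returns the filtered list to actually emit."""
    refcount = {}
    out = []
    for tick, kind, pitch in events:
        if kind == 'on':
            refcount[pitch] = refcount.get(pitch, 0) + 1
            if refcount[pitch] == 1:      # first start: emit note-on
                out.append((tick, 'on', pitch))
        else:
            refcount[pitch] = refcount.get(pitch, 0) - 1
            if refcount[pitch] == 0:      # last stop: emit note-off
                out.append((tick, 'off', pitch))
    return out
```

With two overlapping voices on the same pitch, only the outermost on/off pair survives, which avoids the premature cut-off in the simple keyboard model.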

Independently from that, the second required change is the option (per staff) to split the voices into separate MIDI channels (or even an option to leave the complete assignment to channels up to the user), to allow for the multi-instrument model.

I don’t think restriking is feasible with MIDI and oooo00000OOOOh sounds, so people wishing restriking will probably want the multi-instrument model instead.

Thirdly, channel 0 apparently needs to be not assigned an instrument.

These are three unrelated changes which, combined, fix most of the problem, and which individually fix problems for a subset of users (the first for keyboard-model users, the second for multi-instrument-model users, and the third, and to a lesser extent the first, for stricter MIDI file parsers).

OK, if that link were the official MIDI standard ...

The problem with that link is that it's not the official MIDI standard proper.
From the PDF:
This document was originally distributed in text format by The International MIDI Association. I have updated it and added new Appendices.
© Copyright 1999 David Back.

You wrote:
Musescore assigns note events of the first Instrument (staff) to Track 0. This is wrong.

Track 0 is supposed to be a "conductor" track, which only includes the meta events: all meter and tempo changes. It is not supposed to include notes or other data. Instruments (staves) are supposed to be separated into tracks, starting with Track 1 (not Track 0).
The points above can be verified in the MIDI File specification, sections 2.2 and Appendix 2

as well as:
Track 0 is supposed to be a "conductor" track (also called a "tempo map"), with no note events.

I just double-checked with the original MIDI specs (btw. the official standard is available at midi.org) and confirmed that your reading is simply wrong: the standard mandates that all timing-related "meta" events shall be in the first track (this is a clever requirement, since it enables grouping notes into bars during the file-parsing phase). Nowhere does it require that the first track contain only meta events.

Still the wrong-ish post, but the one with the most information on the topic aside from the https://musescore.com/groups/3642106/discuss/3663846 group posting, so I’m presenting it here after realising BSG is right (thanks man!):

I just wrote a shell script that checks a score for note collisions. This is Unix(Linux)/OSX only, sorry Windows® users (well you might be lucky with Cygwin, but that counts as Unix in my book).

http://mirsolutions.de/music/resources/chkcoll.sh has the script, it’s published under the Ⓕ Copyfree MirOS Licence so everyone can benefit from it.

On Debian, you need the packages musescore (of course), perl (perl-base is probably enough), mksh, midicsv installed to be able to run it.

Other people will have to retrieve the appropriate packages themselves:
- perl is https://www.perl.org/
- mksh is http://www.mirbsd.org/mksh.htm by yours truly
- midicsv is http://www.fourmilab.ch/webtools/midicsv/
- I assume stuff like grep, sed, tr, … is available

The script will, unfortunately, not run headless because MuseScore is written in Qt5 and so, by Qt’s design limitation, requires a connection to an X11 display even when running in nographics mode (although xvfb probably suffices).

To use the script, run it like this:

$ mksh /path/to/chkcoll.sh path/to/file.mscx # or file.mscz or file.mid

To clean up afterwards:

$ mksh -c /path/to/chkcoll.sh path/to/file.mscx # or file.mscz or file.mid

page: 142 / 334

A format 1 representation of the file is slightly different.
First, its header chunk:
4D 54 68 64 MThd
00 00 00 06 chunk length
00 01 format 1
00 04 four tracks
00 60 96 per quarter-note
Then the track chunk for the time signature/tempo track. Its header, followed by the events:
4D 54 72 6B MTrk
00 00 00 14 chunk length (20)
Delta-time Event Comments
00 FF 58 04 04 02 18 08 time signature
00 FF 51 03 07 A1 20 tempo
83 00 FF 2F 00 end of track
Then, the track chunk for the first music track. The MIDI convention for note on/off running status is used in this example:
4D 54 72 6B MTrk
00 00 00 10 chunk length (16)
Delta-time Event Comments
00 C0 05
81 40 90 4C 20
81 40 4C 00 Running status: note on, vel = 0
00 FF 2F 00 end of track
Then, the track chunk for the second music track.
Then, the track chunk for the third music track.
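For concreteness, the header and conductor-track bytes of the spec example above can be assembled as follows; this is a sketch, with all values taken directly from the quoted hex dump:

```python
import struct

# Sketch: build the MThd header and the conductor (tempo/time-signature)
# track from the spec example above, byte for byte.

def chunk(chunk_id, payload):
    """A MIDI file chunk: 4-byte id, 4-byte big-endian length, payload."""
    return chunk_id + struct.pack('>I', len(payload)) + payload

# format 1, four tracks, 96 ticks per quarter note
header = chunk(b'MThd', struct.pack('>HHH', 1, 4, 96))

conductor = chunk(b'MTrk', bytes([
    0x00, 0xFF, 0x58, 0x04, 0x04, 0x02, 0x18, 0x08,  # delta 0: time sig 4/4
    0x00, 0xFF, 0x51, 0x03, 0x07, 0xA1, 0x20,        # delta 0: tempo 500000 us/qn
    0x83, 0x00, 0xFF, 0x2F, 0x00,                    # delta 384: end of track
]))
```

Note that 0x07A120 is 500000 microseconds per quarter note (120 BPM), and the end-of-track delta 0x83 0x00 is the variable-length encoding of 384 ticks, i.e. one 4/4 bar at 96 ticks per quarter.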

The quote you give is from the programming examples section and describes how the music fragment given one page before can be encoded. As the text preceding the score fragments says: "... then, a format 1 file is shown with all data separated into four tracks: one for tempo and time signature, and three for the notes.".
But nowhere does it say that the first track has to be an exclusive meta-event track. The lack of clear terminology (the OP calls the first track "conductor", the example section calls it "time signature/tempo track") indicates that the standard doesn't limit that first track.
I'm not saying that it doesn't make sense to have such a special track, just that the wording of the standard doesn't mandate a meta-data-exclusive first track.

I still have doubts that many of these documents that people are emphatically claiming are *the* midi standard are indeed part of the original midi standard. From Wikipedia article on midi:

MIDI technology was standardized in 1983 by a panel of music industry representatives, and is maintained by the MIDI Manufacturers Association (MMA). All official MIDI standards are jointly developed and published by the MMA in Los Angeles, California, US, and for Japan, the MIDI Committee of the Association of Musical Electronics Industry (AMEI) in Tokyo. In 2016, the MMA established The MIDI Association (TMA) to support a global community of people who work, play, or create with MIDI, establishing the www.MIDI.org website as the central repository of information about anything related to MIDI technology, from early MIDI technology to future developments.

So maybe someone needs to dig up the original 1983 standard. Maybe midi.org has it. I have a strong suspicion that the industry has made up its own de facto standard for the file format that isn't necessarily part of the written specification.

I'm still not convinced that the first track is prohibited from containing note events.

This is still minor, considering that major is reserved for crashes or data loss in MuseScore.

I vaguely recall reading that either only track 0 is used or that tracks 1–n are used while track 0 is used as a conductor track (newer, recommended). I don’t have a reference for this, though. Might be better to be conservative about what we send… so perhaps a change might be in order, even if minor.

Good news, I’m able to build MuseScore (2.1, but I can prepare a fix for the 2.x branch which someone can then forward-port to 3.x) and thus likely able to take care of the refcounting bug soonish.

page 134: (The original is a single paragraph, I put line-breaks for easy reading.)

In a MIDI system with a computer and a SMPTE synchronizer which uses Song Pointer and Timing Clock, tempo maps (which describe the tempo throughout the track, and may also include time signature information, so that the bar number may be derived) are generally created on the computer.

To use them with the synchronizer, it is necessary to transfer them from the computer.

To make it easy for the synchronizer to extract this data from a MIDI File, tempo information should always be stored in the first MTrk chunk.

For a format 0 file, the tempo will be scattered through the track, and the tempo map reader should ignore the intervening events; for a format 1 file, the tempo map must be stored as the first track.

It is polite to a tempo map reader to offer your user the ability to make a format 0 file with just the tempo, unless you can use format 1.

Severity S3 - Major

In an attempt to refocus this thread, I repeat:

In a Type 1 MIDI file, Track 0 is supposed to be a "tempo track" or "conductor track", having only meta events, no note events.

Putting notes on Track 0 is a basic mistake. To anyone who doubts this:

  1. Download any Type 1 MIDI file on the internet not created by MuseScore, and take a look at its data. You will see that Track 0 contains tempo events and no note events.
  2. Open any DAW or MIDI editing software and look at how changes to tempo information are made for a MIDI file. You will find that the tempo changes are applied to Track 0 only, that Track 0 contains no note data, and in most cases the user is prohibited from putting notes onto Track 0. For example, Logic X does not include Track 0 in the MIDI Events list; it is a separate list entirely, reserved only for Tempo.
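For anyone who wants to automate check 1 above, here is a rough sketch of a scanner that reports whether the first track of a standard MIDI file contains note events. It is a simplified parser: it assumes a well-formed file and skips some edge cases (e.g. a channel data byte before any status byte):

```python
import struct

def _vlq(buf, i):
    """Read a MIDI variable-length quantity; return (value, next index)."""
    value = 0
    while True:
        value = (value << 7) | (buf[i] & 0x7F)
        i += 1
        if not buf[i - 1] & 0x80:
            return value, i

def track0_has_notes(data):
    """data: bytes of a standard MIDI file. True if the first MTrk
    chunk contains any note-on/note-off events."""
    assert data[:4] == b'MThd'
    pos = 8 + struct.unpack('>I', data[4:8])[0]        # skip MThd chunk
    assert data[pos:pos + 4] == b'MTrk'
    length = struct.unpack('>I', data[pos + 4:pos + 8])[0]
    track = data[pos + 8:pos + 8 + length]
    i, status = 0, None
    while i < len(track):
        _, i = _vlq(track, i)                          # delta time
        if track[i] & 0x80:                            # explicit status byte
            status = track[i]
            i += 1
        if status == 0xFF:                             # meta: type, len, data
            i += 1
            mlen, i = _vlq(track, i)
            i += mlen
        elif status in (0xF0, 0xF7):                   # sysex: len, data
            mlen, i = _vlq(track, i)
            i += mlen
        else:                                          # channel message
            if (status & 0xF0) in (0x80, 0x90):
                return True                            # note event found
            # program change / channel pressure take 1 data byte, others 2
            i += 1 if (status & 0xF0) in (0xC0, 0xD0) else 2
    return False
```

Running status is handled: a data byte (high bit clear) after the delta time reuses the previous status, as in the spec's "Running status: note on, vel = 0" example.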

Arguing about whether this practice is clear in the MIDI Spec or not is a waste of time. This is the way MIDI files have been made for decades. It's the standard practice. MuseScore does it wrong.

NOTE: the status of this bug has been changed back to "major" because, as previously stated, this fits the canonical MuseScore description for a major bug.

Thank you.

Severity S3 - Major S4 - Minor

Again, we have agreed that yes, it would be wise to make track 0 contain no note events; but it is not a strict violation of the standard.
Resetting the priority to normal, as compared and balanced against other issues and MuseScore's core functionality (being a score engraver, not a sequencer).

Part of the "Not being a sequencer" responsibility is being able to transmit scores to "yes, I'm a sequencer indeed" tools.
On a very related subject, my own tools have noted that if the "real stuff" (i.e., notes, which don't belong there, and time signatures) in whatever happens to be track 0 ends sooner than the tick-end of the whole piece, the end-of-track element is time-tagged for the former, not the latter, requiring me to scan all tracks to determine how long the piece really is. This is surely wrong.
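The scan described above amounts to something like the following, assuming the tracks are already parsed into (delta, event) pairs (a hypothetical representation of parsed track data):

```python
# Sketch of the workaround: since track 0's end-of-track may be
# time-stamped before the piece actually ends, the true length must be
# taken as the maximum end tick over *all* tracks. Each track is a
# list of (delta_ticks, event) pairs ending with an end-of-track
# marker; this representation is hypothetical.

def piece_length_ticks(tracks):
    """Return the true end tick of the piece: the largest cumulative
    delta-time sum across all tracks."""
    return max(sum(delta for delta, _ in track) for track in tracks)
```

If the files were written correctly, reading track 0's end-of-track alone would suffice; the scan is only needed for files like the ones currently exported.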

In reply to by jeetee

Severity S4 - Minor S3 - Major

jeetee, we don't agree on this. Saying it's not "a strict violation of the standard" is misleading. Putting notes on track 0 is wrong. It's a bug. It's a violation of the standard practice. These wrongly formed MIDI files output from MuseScore can't be read properly by other software that expects standard MIDI files to be properly formed. That is what matters. Whether or not this issue is clearly worded in the MIDI spec makes absolutely no difference.

Status changed back to "major", and please don't change it again. See:
Common feature incorrectly/not functioning

@.function_: Instead of arguing, how about you take a deep dive into the code and fix it? You were talking about this back in May, seems enough time has passed to familiarize yourself with the code meanwhile? Scratch your own itch, that's the Open Source nature of things.

In reply to by Jojo-Schmitz

A friend and I did in fact go through the source and try to fix the problem. What we found is that the MIDI file output implementation in MuseScore is very convoluted and far from transparent. We aren't confident that the changes we made are sufficient to fix the issue, and we weren't able to do any tests (the instructions for compiling MuseScore are an out-of-date mess). It would be better for someone else more familiar with the code to fix it, test it, do the pull request and so forth.

@BSG would you mind creating an issue for the end-of-track timing being wrong?

@.function_ Stating a fact is not misleading, especially not when agreeing with you on the desired behavior in the same paragraph.
As for the priority setting, I'll leave it as it is now, because far too much time has been spent on discussing this; just note that I did not change the setting without consideration. Indeed, read back into the instructions you've linked to; an issue being of major importance to a few does not make it major when balanced against the full project. Exporting MIDI is far less common than entering notes, for example...

I believe we've had a conversation about compile instructions a couple of months back already; Are you still attempting to build on a non-supported OS and is that the reason you think the instructions are an out-of-date mess? Or are they an out-of-date mess because you and your friend haven't been able to figure things out and as a consequence have not updated the instructions yet?
Once again you are invited to join the developers chat on IRC concerning compile issues.

In reply to by jeetee

MIDI file I/O is considered a basic feature of music notation software, and this is a major bug.

"Are you still attempting to build on a non-supported OS and is that the reason you think the instructions are an out-of-date mess? Or are they an out-of-date mess because you and your friend haven't been able to figure things out and as a consequence have not updated the instructions yet?"

No, I was never attempting to build on a non-supported OS. I develop on Mac OS. Mac users looking at the instructions for building on Mac OS will notice that those instructions are way out of date. To avoid the mess, my friend and I decided to work on his Windows system. Building was a bit beyond what he was comfortable using that machine for at the time. From what we could tell, Linux seems to be the preferred platform for building, and neither of us use Linux. Of course, this is another topic, which shouldn't be continued on this thread.

Last time you were developing on macOS 10.9, for which Apple stopped support. So you probably won't get the 'latest Xcode' for it, as the developers' handbook asks for. Following the discussion of your build problems, that page had been updated. It still is a moving target though; meanwhile Qt 5.9.1 is the latest (but using 5.8 is still OK for master, and the 2.x branches still need 5.4).
You consider MIDI file I/O a basic feature, I personally don't. Much more important is PDF / print, i.e. sheet music, and, for exchange with other score writers, MusicXML.
Please stop complaining and arguing, instead dive into the code and fix it, I'm sure you can do it. At one time you even claimed it to be a 5 minute fix.

In reply to by jeetee

I think we are all wasting way too much time on this non-topic. It seems like the OP wrote some software that violates one of the most basic rules of data handling: "be strict in what you create but be lenient in what you accept". That's unfortunate but definitely not a major bug of MuseScore (probably more so one in MIDI Tapper). As for Industry Best Practices: I just did a small test with some mainstream software on Mac and Linux. DAWs: GarageBand, Logic X, Ardour, Reaper and Cubase import a MuseScore-generated Type-1 MIDI file without any problems (notes from all three tracks get loaded, tempo changes are honored (except in Ardour, which seems to ignore tempo changes)).
Notation software: both Finale and Sibelius import without any problems.
Algorithmic composition, MIR software: both music21 and CommonMusic import without any problems.
All tracks get imported.
N.B.: none of the software showed even a warning during import ....

In reply to by Jojo-Schmitz

Excuse you, but "complaining and arguing" have come from others here, not from me.

"At one time you even claimed it to be a 5 minute fix."

For my own software, code that I wrote. For MuseScore, for a person who wrote the existing code, it should be as easy, but as I just said, the code for exporting MIDI files is remarkably convoluted and opaque to an outsider. I already explained that we did our best to sort it out and implement a fix ... read above.

In reply to by rmattes

mattes, pushing off a major bug in MuseScore as a fault in my software is beyond ridiculous. As for all the other software importing a file with no problems, you've made a lot of claims without any proof. Include the MIDI file exported from MuseScore so anyone can import it into any software and see what's wrong with it. Just because software imports a file and doesn't give you a warning, and even plays it back, doesn't mean the file is fine and everything is hunky-dory. See BSG's other post about how MuseScore's wrong treatment of Track 0 results in other problems with other software. I suppose you are going to blame all the other software for that too.

In reply to by neo-barock

Hate to argue against myself, but, in fact, the software that had trouble was my own software, so I can and did recode it (and it will forever have to do that to handle unfixed files already written), but MS should clearly be fixed in this regard, as well as keeping notes out of track 0 for what my opinion is worth.

Title MIDI export: Track 0 should not contain note events MIDI export: the first track should not contain note events

How are we supposed to handle local time (or even key) signatures, if they all “have to” be moved into the first track? (Caveat: midicsv labels the first track as 1, not 0; mind that to avoid confusion.)

I decided to not work on this and focus on the collision problems for now. Both the code and the MIDI format are hard, and MuseScore is, first and foremost, about notation, according to lasconic. Then, playback, and finally MIDI export.

In reply to by mirabilos

The decision not to move to the first track right now is fine with me. I don't understand what issues of local time (!?) and key signature you are talking about. AFAIK everything but notes gets moved: time signatures, key signatures, and all. The file cannot be understood without interpreting the first track. You can prove it by doing it (in secret), re-importing the result, and verifying that it's fine. I have done this with external code, and everything works well (with MS and Hauptwerk as interpreters).

In reply to by [DELETED] 1831606

Mh, I just said I personally wasn’t working on it. Perhaps someone else will, but probably not in time for 2.2; I am currently trying very hard to get the collision problem at least fixed for 2.2.

You can have different time signatures in different staves, and one stave is exported as one MIDI track, so time signatures can NOT generally be moved to the first track (from a MuseScore PoV; I do not know enough about MIDI to comment from its PoV).

In reply to by mirabilos

Eeek. There certainly can be different time signatures on different staves (easy case 9/8 against 3/4 etc.) and I don't know what the MIDI standard expects there. That is a serious issue. On the flip side, perhaps the reason I forgot about that is that MuseScore, as far as I know, is incapable of expressing that. You can change the way a time signature appears on a staff, but not its real content (am I wrong?). But, clearly, there is an issue of what should be moved, and leaving it as is seems sane for now. (I misparsed "local time .... and key signatures", sorry!) OTOOH, @Demircan above shows us a message box from Sekaiju COMPLAINING about time and key signatures not in track 0!

MuseScore does support local time signatures (e.g. 3/4 and 9/8 simultaneously). See the Handbook under Time signatures for more. There are lots of limitations with this, so I wouldn't be overly concerned about the MIDI aspect in particular.

In reply to by mirabilos

Local time signatures are midi-illusory. I did a little experiment (attached file), and local time signatures entered with control-drag do not go into the midi file; the meaning of ticks in every track is identical, and determined by time-signatures entered in the conventional way. Even dropping a local time-signature onto the first track (which seems to expose a bug, i.e., I can't enter the last eighth) does not change the meaning of ticks in that or any track.

It's easy to imagine that chaos would result if a midi-player required potential different click-clocks, as it were, for different tracks. Ergo, time signatures appearing in multiple tracks can be ignored, if gathered from the first/control track.

Local time signatures are an amazing feature, but (thank goodness) seem not a MIDI one.

Attachment Size
LocalTime.mscz 7.2 KB
LocalTime.mid 436 bytes

OK, good to know, thanks for your experiments. So I^Wthe person who’d implement it would just discard any time signature information from all tracks but the first?

Severity S3 - Major S5 - Suggestion
Frequency Few
Priority P2 - Medium
Regression No
Reproducibility Always
Workaround No
Reported version 2.1  

So is everyone now agreed that, from a pragmatic perspective, MuseScore should not include notes in track 0 of its MIDI exports?

IMHO it's not important whether including notes in track 0 is or isn't technically a violation of the MIDI standard. The practical advantages of adhering to a long-established and generally accepted best practice (i.e. compatibility with other software) seem clear and of far more relevance to users than what may or may not be written in, or intended by, a document from the last millennium.

In the duplicate issue linked by @Jojo-Schmitz immediately above, I think @joeshirley summed it up best (https://musescore.org/en/node/273557#comment-840828):

"The question I cannot find an answer to is, what advantage does MuseScore gain by putting note information into Track 0. If there is no compelling advantage, it would seem best for the greatest number of users to consider this a bug to be fixed in a future update. Perhaps the resistance to updating the practice has at least partly to do with the fact that MuseScore's implementation of midi import/export is convoluted enough that this is not a simple fix?"

So I'm hoping that the only thing blocking resolution of this now is finding someone with sufficient motivation and patience to actually change the code?

In reply to by adam.spiers

You’re kinda late to the party, this was already resolved into that direction some time ago.

Understanding is also needed. I tried to look into this some time ago but couldn’t even figure out how to disentangle things that should and shouldn’t be in track 0. MuseScore internally is not exactly built as MIDI software; rather, it’s got a sound renderer whose byproduct happens to be standard MIDI files.

In reply to by mirabilos

"You’re kinda late to the party"

Yes, this is inevitable in FOSS projects where anyone is free to join at any time ;-)

"This was already resolved into that direction some time ago."

I thought there was probably consensus, but it's not really clear from reading this issue, which is why I asked for clarification. Thanks for helping provide that!