YouTube tribute to the MuseScore team.

• Dec 14, 2015 - 21:33

I've just posted my latest MuseScore project on YouTube.

Dedicated to the developers of MuseScore, it's a short symphonic
first movement in the Classical style. My way of saying thanks for
a piece of software I cannot live without.

Even if you're not classically-oriented, I encourage having a
listen. The piece is short (3:40). What's notable is that the
playback is 99% pure MuseScore, manipulated entirely from within the
program. Other than chorusing for the winds when they're playing in
unison, which MuseScore doesn't do (despite a Mixer knob labelled
Chorus) and shelves applied at both ends of the EQ spectrum, no
external plugins or post-audio processing were used. Levels,
panning and reverb were all done in MS.

The results are very, very good. Enough so that I composed the
entire piece with nothing but MuseScore. I had no piano or other
keyboard to hand, no manuscript or sketches, just MuseScore. Having
spent hundreds of hours, if not more, assembling an orchestra
from free soundfonts on the Web, and a couple of weeks tweaking
levels, etc., I was able to compose confidently. Any ideas I had
with respect to scoring came out as expected. More importantly,
weaknesses in the orchestration were instantly revealed.

Part of my reason for writing the piece was, in fact, to make
a point about playback. Frankly, and with due respect to the
developers, I think the oft-repeated refrain, "MuseScore is a
notation program," is nonsense. MuseScore, regardless of
its original mandate, has become a functioning tool for musical
composition that provides both engraving and playback
facilities. Insisting otherwise is like selling F-150s with a
lightweight Class I hitch and excusing the inadequacy by saying a
pickup is supposed to be a truck, not a tractor.

Put another way, playback from MuseScore is too good to be dismissed
as an extra (I believe my little piece proves that), but not yet
good enough for the purpose it is expected to perform.

The goal of playback from MuseScore isn't symphony hall
realism, but rather a midi mockup that satisfactorily simulates
instrument/pitch/dynamic as well as the basics of articulation and
phrasing. To quote the estimable Mattias Westlund (of the Sonatina
Symphonic Orchestra) on the subject of midi orchestration:

If you’ve done your homework you should be able to get a reasonably
convincing orchestral mix from the following ingredients and nothing else:

1. Arrangement
2. Samples
3. Panning
4. Levels
5. Reverb

Meaning, an arrangement that makes sense, samples that do their job,
panning that places things from left to right, levels that are
balanced, and reverb that adds depth and makes things gel.

The first item relies on the talent of the composer, and the second
is an Everest we all have to scale unless we have the budget for
expensive, proprietary samples and soundfonts. The remainder are
controllable from within MuseScore, and, as my piece shows, if you
take the time to get them right, the result is convincing.

What, then, does MuseScore need to facilitate and improve playback?
The answer is: not very much. Based on what I could not accomplish
with the present release of MuseScore, I came up with this short
list of missing essentials:

  1. Controllable gate offtimes, which are required for phrasing.
    Gate times used to be controllable in 1.x; it shouldn't be too
    difficult to re-implement.
  2. Hairpin volume changes through long notes. You cannot get
    a reasonably accurate representation of a four-bar orchestral
    crescendo if your double-basses, playing a tied pedal point, stay at
    the same volume while everybody else gets louder.
  3. Controllable staccato lengths on a per-articulation, or at the very
    least, per-instrument basis; a staccato blast on a trumpet requires
    a very different degree of shortness compared to a spiccato note on
    a violin.
  4. Chorusing on a per-channel basis. Flutes, for example, playing in
    unison have a significantly different sound from flutes playing solo
    (have a listen to Tchaikovsky if you don't believe me).
  5. It must be made possible to set the number of midi channels a
    staff requires, and to choose the soundfonts associated with those
    channels.
As I said, it's a short list, and, other than volume changes through
held notes and chorusing, both of which might be problematic, not a
coding nightmare.

The last item is the single most important, and shouldn't, IMO,
be brushed aside (as it has been the few times I've suggested
it). If one can write con sord. over a trumpet part, right
click on it and select the "Muted trumpet" channel, one should
be able—sanely, reasonably, and intuitively—to write
a due over the flute part, right click on it, and select
a "Chorused flutes" channel, presupposing one has been added to
the staff. The same for stringed instruments, whose Normal,
Tremolo, and Pizzicato channels are woefully too few to handle the
differing soundfonts required for the changes of string timbre and
articulation that are essential to understanding what's being said.

I was able to get MuseScore to behave with respect to multiple
channels per staff by manually editing my .mscx file. It allowed
me to switch between solo and chorused winds (chorusing provided
externally), and to access the soundfonts necessary for the frequent
changes of string articulation and timbre. I think the results
speak for themselves and make a solid case for being able to add
channels to a staff from within the program itself.
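For anyone curious, the manual edit amounts to a few lines of XML. The sketch below is from my memory of the 2.x .mscx schema, so treat the element names and values as illustrative rather than authoritative; inspect a real .mscx file before trying it:

```xml
<!-- Illustrative only: inside the staff's <Instrument> element,
     an extra channel alongside the default "normal" one. The
     program value points at the patch the alternate soundfont
     occupies in the loaded bank. -->
<Channel name="flutes a due">
  <program value="73"/>
</Channel>

<!-- A staff text (visible or hidden) then switches to it, via the
     same mechanism "pizz."/"arco" uses for strings. -->
<StaffText>
  <channelSwitch voice="0" name="flutes a due"/>
  <text>a 2</text>
</StaffText>
```

Once the channel exists, it shows up in the Mixer like any other and can be assigned its own soundfont, level, and pan.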

I realize the items on my list could be posted in Feature Requests,
but I'm more concerned with getting playback to be treated as an
integral part of the MuseScore picture than specific work that
needs to be done. I'm hoping posting here instead will stimulate discussion.

That said, a huge thanks to all the developers and users who've
made MuseScore the truly wonderful tool it is.


I listened to your piece: it is really good. And you are right, you got the most out of MuseScore. Well done, really; I am really impressed.

Thanks for your comments, and congratulations on the wonderful composition!

FWIW, I agree with everything you have written above except your statement "not yet good enough for the purpose it is expected to perform". Expectations are, of course, in the eye of the beholder. I think yours are higher than those of 99% of users. For most people, playback is already more than good enough, and realistically, almost no one but a small handful of extremely advanced users would take advantage of anything on your list except #2, which would of course benefit everyone immediately. And it's that sort of consideration - what improvements benefit the most people for a given amount of implementation effort - that drives decisions. Spending months or years developing a feature that benefits only a dozen users is obviously not as smart a use of time as spending days or weeks developing a feature that benefits thousands of users. Figuring out how to draw that line is the tricky part.

In reply to by Marc Sabatella

Marc, the statement you disagree with is the one I fretted over most, and still didn't get it right. What I wanted to say, less formally, is that if MuseScore playback can be as good as my piece demonstrates, why not take it a couple of (relatively easy) steps further? There's so little that needs to be done to go from Cadillac playback to Rolls, at least for symphonic music, and all of it in the midi department. The integration with FluidSynth is already solid, and the superior zita reverberator is a godsend. All that's missing is a tiny bit more control over midi events (swell through long notes and controllable offtimes). I can live with adding extra channels to staves in my .mscx files instead of having a handy dialogue in the program itself, although I admit I can't understand why no one else seems to grasp the value of extra channels.
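For what it's worth, the swell half of that is a standard MIDI trick: ramp CC11 (expression) events across the held note instead of relying only on the velocity sent at note-on. A plain-Python sketch of the idea follows; it is not a proposal for how MuseScore's internals should do it, just an illustration of how little data a swell requires:

```python
def expression_ramp(start_tick, end_tick, start_val, end_val, step_ticks=24):
    """Emit (tick, CC11 value) pairs ramping linearly across a held note."""
    span = end_tick - start_tick
    return [
        (t, round(start_val + (end_val - start_val) * (t - start_tick) / span))
        for t in range(start_tick, end_tick + 1, step_ticks)
    ]

# A crescendo across one whole note (1920 ticks at 480 per quarter):
events = expression_ramp(0, 1920, 64, 110)
```

FluidSynth already responds to CC11, so the missing piece is only the generation of these events under a hairpin.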

I've been promoting MuseScore (and its playback) for quite a while now on my YouTube channel (i.e., every video is done with MuseScore), so my interest is as much in getting MuseScore positioned as the compositional tool of choice for Open Sourcers as it is in making my own music sound good.

Something to take into consideration is the educational value of the program. I learned orchestration the hard way in the pre-digital age: by trial and error, and that only if I could round up a student orchestra. I want to weep for lost time when I imagine how much more quickly (and probably better) I could have learned the art if I'd had a digital orchestra like the one MuseScore provided for my little symphonic piece. It's part of the reason I keep insisting that playback is 50% of the MuseScore picture even though there's—how shall we say?—resistance to seeing it that way. :)

All in all, of course, I freakin' love MuseScore. Talk about a life-changer!

In reply to by Peter Schaffter

My own list of big missing playback features:

1) The expression controller for dynamic changes without changing notes, which you mention and has been brought up before;

2) The option for a single dynamic marking (like sfz) to uniquely affect the single note it is applied to, and not all notes following. I can visualize the Inspector controls: "First note velocity" and "Continuing velocity";

3) Differing playback for different repeats, e.g., Tacet 2x, 8va 2x, p-f. I can't really visualize controls for this, though;

4) Orchestral drumkit as default patch for most percussion. Unfortunately, this has been vetoed because some third-party SoundFonts don't include an orchestra kit, but I thought I'd mention it again.

Things that would be super nice but probably not worth the effort of implementing them include the notorious slur question, and expanding the default SoundFont and instruments.xml to include muted sounds for more instruments than trumpet.

Just some of my own thoughts on this topic. Nice work with that orchestration, Peter!

In reply to by Peter Schaffter

Understood. My comments were meant to be general. The point being, everyone has their own list of improvements they hope might be "relatively easy" but would make a difference, and unfortunately everyone's list is very different. So the problem remains one of prioritizing.

BTW, instruments.xml already allows one to specify different lengths of staccato for different instruments. Feel free to customize it, and if you find specific values you like, submit them for consideration for a future release!
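For anyone who wants to try, the entries in question look roughly like this. The tag names and values below are from memory of the 2.x file and may not match your version exactly, so check your installed copy of instruments.xml first:

```xml
<!-- Illustrative sketch of a per-instrument articulation override.
     gateTime is the percentage of the written duration that sounds;
     velocity scales loudness relative to the prevailing dynamic. -->
<Instrument id="trumpet">
  <Articulation name="staccato">
    <velocity>120</velocity>
    <gateTime>35</gateTime>
  </Articulation>
</Instrument>
```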

In reply to by Peter Schaffter

I agree with you 10,000,000%. There's a lot of room to improve this aspect of MuseScore, and I have some ideas for how it might be implemented in the software. In short, the trick is to model tweaks as a graph of non-destructive edits. Let's call this performance modeling. For example:

Suppose you want to tweak the individual notes of a trill: say, a ritardando with the very last note sounding a bit louder than the rest. You could model that as a stack of operations where each node depends on the output of the previous.

[ 1. note object ] 
 '–[ 2. apply trill ] *output is a series of notes*
    '–[ 3. apply ritardando ] 
       '–[ 4. select last note ]
          '–[ 5. velocity offset +15 ]

These are non-destructive nodes, meaning you could tweak or completely remove the ritardando and the rest of the graph would remain valid.
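As a sketch of the idea in plain Python (the node functions and their parameters are made up for illustration; nothing here is MuseScore code): each node is a pure function over a list of notes, so any node can be tweaked or dropped and the chain simply re-run.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Note:
    pitch: int     # MIDI note number
    ticks: int     # duration in MIDI ticks
    velocity: int  # 0-127

def apply_trill(notes, interval=2, step_ticks=60):
    """Node 2: expand each note into alternating principal/upper notes."""
    out = []
    for n in notes:
        for i in range(n.ticks // step_ticks):
            out.append(replace(n, pitch=n.pitch + (interval if i % 2 else 0),
                               ticks=step_ticks))
    return out

def apply_ritardando(notes, stretch=1.5):
    """Node 3: linearly stretch durations toward `stretch` x original."""
    if len(notes) < 2:
        return list(notes)
    last = len(notes) - 1
    return [replace(n, ticks=round(n.ticks * (1 + (stretch - 1) * i / last)))
            for i, n in enumerate(notes)]

def accent_last(notes, offset=15):
    """Nodes 4+5: velocity offset applied to the final note only."""
    bumped = replace(notes[-1], velocity=min(127, notes[-1].velocity + offset))
    return notes[:-1] + [bumped]

# The stack from the diagram: note -> trill -> ritardando -> accent last.
source = [Note(pitch=72, ticks=480, velocity=80)]
performed = accent_last(apply_ritardando(apply_trill(source)))

# Non-destructive: drop the ritardando node and the chain is still valid.
no_rit = accent_last(apply_trill(source))
```

Because the source note is never mutated, editing any one node only re-evaluates the chain below it.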

Really impressive Peter! I very much enjoyed listening to your music. Thank you for dedicating it to the MuseScore developers.

In reply to by [DELETED] 5


Using Instrument Change to switch soundfonts for articulation and timbre runs into several problems:
  1. Every instrument change adds a new entry to the bottom of the Mixer, even when reverting to the original instrument associated with the staff.
    • Problem: If one has to make frequent changes, the Mixer list becomes unmanageably long.
  2. The soundfont assigned to the Instrument Change is the default from the Select Instrument dialogue, which won't be correct if another was originally assigned to the staff.
    • Problem: If, e.g., the flute from a soundbank is awful and an alternative flute soundfont is chosen as the default, one has to change the soundfont (in the Mixer) every time one does an Instrument Change back to the chosen default.
  3. The names of the instruments in the Mixer reflect the name of the instrument chosen from the Select Instrument dialogue and can't be changed.
    • Problem: If, e.g., one switches from Flute to Piccolo on a staff (an example only; one wouldn't of course do this), the Mixer will list the instrument as "Flute" with an associated soundfont of "Piccolo". This becomes a huge problem when switching between soundfonts, not actual instruments, for the purposes of achieving a particular articulation or timbre. If one has a "solo" flute soundfont for parts of a score where one doesn't want the flutes doubled, and "section" flute soundfont for when they're playing a due, it becomes nightmarish to navigate down the Mixer list to a batch of entries all labelled "Flute", all with the MuseScore default flute soundfont, and figure out which ones are intended to be solo and which a due so their associated soundfonts can be changed.
  4. Instruments added with Instrument Change use the Mixer's default volume and pan.
    • Problem: If this is not what is desired, every Instrument Change requires adjusting the volume and pan in the Mixer. And if one later decides to change those settings (say, pan the flutes a little more to the left), every single Instrument Change has to be manually set to the new value in the Mixer.

In the symphonic piece I wrote, there are probably 50–75 places where I switch from one articulation or timbre to another (mostly in the strings, between legato and détaché, but also in the winds, between "solo" and "section"). If I had used Instrument Change to accomplish this, I would have had a Mixer with 50–75 additional entries, every one of which would have had to be manually assigned the correct soundfont, volume, and pan. For obvious reasons, I didn't go that route. Instead, I manually added channels to staves that needed them (in the .mscx file) and set them up once only in the Mixer, with the extra advantage that they appeared in the proper place in the Mixer (flutes together, oboes together, etc.). Switching between them was done with Staff Text=>Staff Text Properties, hidden in the case of the strings, visible in the case of the winds (i.e. "I." or "I.II."). Making changes to the settings of the channels was also an in-one-place/global-result operation, which was vastly easier than trying to manage the 50+ additional entries that would have resulted from using Instrument Change.

In short, managing soundfont switches through the use of additional channels in staves solves every single one of the problems associated with trying to use Instrument Change for the same purpose. It's why I propose that being able to add multiple channels to any staff from within the GUI is...well...kinda essential.

Bravo, Peter. This is a great demonstration of MuseScore's ability! I'm a relative newcomer to the program, but I too am incredibly impressed by it. The fact that it's free is just unbelievable. I've just completed my first piece with it ("Film Score Demo" in the "Made with MuseScore" section), and I still have much to learn. This thread was very helpful in that regard, so I wanted to say thanks for that!

That was an absolutely fantastic piece, and I agree with you on your 5 major points. If I may, I'd add a 6th that I noticed would also be helpful with this score (and others):

6) The piano roll editor should be able to manipulate multiple notes at once, and to manipulate more than just the note as written.

Right now, in your piece, the trills suffer from a "machinegunning" effect: there's too much consistency in the off/on times of each note in the trill, and it's impossible to fix in a sane manner without building an invisible trill out of normal notes of the desired length. It's not just that the individual notes inside the trill can't be manipulated in the piano roll editor; the note as a whole can't be manipulated either, because you're technically manipulating multiple notes, which the editor forbids.
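One workaround used in MIDI mockups generally (outside MuseScore) is to jitter the generated onsets slightly so they don't land on a perfect grid. A tiny Python sketch of the idea, illustrative only:

```python
import random

def humanize(onsets, max_jitter_ticks=8, seed=7):
    """Nudge each onset by a small random amount to break the
    machine-gun regularity of generated trill notes. Seeded so
    a re-render sounds the same each time."""
    rng = random.Random(seed)
    return [t + rng.randint(-max_jitter_ticks, max_jitter_ticks)
            for t in onsets]

onsets = list(range(0, 480, 60))   # eight evenly spaced trill notes
jittered = humanize(onsets)
```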

In reply to by LuuBluum

The machine gunning actually has more to do with the soundfonts than the midi. The string trills don't sound half bad, but the woodwinds are in serious Gatling gun territory. It's because the samples used to build most woodwind soundfonts are tongued. The effect is therefore as if a real player were somehow able to tongue every note of the trill. I don't think there's anything the pianoroll editor could do about that, since it's a question of attack, not ontime.

I agree that the pianoroll editor is something that needs work, though. The shape it's in now, it might as well not be there. I'm assuming that will change over time.

@Peter Schaffter: "Flutes, for example, playing in unison have a significantly different sound from flutes playing solo
(have a listen to Tchaikovsky if you don't believe me)"

-- Can you, please, provide a more precise reference to Tchaikovsky's music here?
