Introducing Telemetry in the Notation Software
We're all aware that a lot of improvements need to be made to MuseScore 3. In the past, our focus has been directed by conversations held on the forum or feedback sent directly by users. In addition, we have now started to user-test the app regularly with a wide variety of musicians (thanks to Tantacrul). This is helping us get a sense of who our users are and where they are succeeding or struggling with MuseScore 3.
Feedback and forum conversations help us learn about bugs, missing features and underperforming functionality; testing lets us see which parts of the app work well and which are invisible to the user. However, there are many other things we want to understand. For example, we currently fill a significant portion of our top bar with large buttons for the following: New, Open, Save, Save to Cloud, Print, Undo, Redo, Zoom Controls and 'Concert Pitch'. What we don't know is what percentage of our audience uses these buttons rather than the file menu or keyboard shortcuts. Tantacrul: "From my previous experience designing Paint 3D at Microsoft, I can tell you that our most used control was the Undo button, and I would expect that to be the case here." However, we suspect that Print, Save to Cloud, the zoom controls and 'Concert Pitch' are used far less and could safely be moved elsewhere to make space for functions that see much more day-to-day use.
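To illustrate the kind of question this could answer, here is a minimal sketch of how anonymised events might record which UI path triggered an action (toolbar button, menu, or shortcut). The event names and schema below are invented for the example; they are not MuseScore's actual instrumentation.

```python
from collections import Counter

# Hypothetical anonymised events: (action, invocation_source).
# Neither the schema nor the data reflects real MuseScore telemetry.
events = [
    ("undo", "toolbar"), ("undo", "shortcut"), ("undo", "shortcut"),
    ("save", "shortcut"), ("print", "menu"), ("undo", "toolbar"),
]

def source_share(events, action):
    """Fraction of invocations of `action` coming from each UI source."""
    sources = Counter(src for act, src in events if act == action)
    total = sum(sources.values())
    return {src: n / total for src, n in sources.items()}

# For 'undo', half the invocations came from the toolbar:
print(source_share(events, "undo"))  # {'toolbar': 0.5, 'shortcut': 0.5}
```

Aggregates like these would show whether a toolbar button is pulling its weight without recording anything about who performed the action.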
The ability to look at telemetry data to understand which aspects of the interface are being used is a vital tool in helping us prioritise effectively. Take the Inspector, for example. We all know it needs to be rearranged to provide better contextual options, but how should we go about doing that? Our first problem is determining the 'value' of certain functionality over others. If we decide a feature is niche and demote it to a secondary or 'advanced' menu, there will inevitably be complaints. The problem then is determining whether those complaints represent 0.1% of users or 10%. Telemetry can help us answer these kinds of questions.
Telemetry would also help us determine whether new designs are performing well. At the moment, there's an open question about how many people have used the new customisation features in the Palettes panel and whether they have made much of a difference to the experience of first-time users. If we suspect that aspects of the new design are suffering from poor communication, we could run small, tracked experiments to validate that suspicion. In some cases, a big impact can result from very simple changes, like rewording a phrase or making an icon less ambiguous.
High-level telemetry
Another goal of telemetry is to track why MuseScore is being used. For example: how many of our users open MuseScore to complete quick musical tasks, like converting one file format to another? How many use it for large-scale orchestration, and how many export parts? We'd also like to know how much time is spent in the app: how many sessions are under 30 seconds, and how many are over two hours? How many sessions include an export to a MIDI file? High-level telemetry should also help us measure retention and stability more reliably. It would be very useful to know how many people who downloaded the app in the last six months are still using it, and what percentage of sessions end in a crash.
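The session-level questions above boil down to simple aggregates over per-session summaries. The sketch below shows what that might look like; the `Session` fields and sample data are hypothetical, chosen only to make the arithmetic concrete.

```python
from dataclasses import dataclass

@dataclass
class Session:
    # Hypothetical per-session summary; field names are invented.
    duration_s: float
    exported_midi: bool
    crashed: bool

sessions = [
    Session(12, False, False),     # quick task, under 30 seconds
    Session(5400, True, False),    # 90-minute session with a MIDI export
    Session(25, False, True),      # short session ending in a crash
    Session(9000, False, False),   # 2.5-hour orchestration session
]

def summarise(sessions):
    """Share of sessions matching each high-level question."""
    n = len(sessions)
    return {
        "under_30s": sum(s.duration_s < 30 for s in sessions) / n,
        "over_2h": sum(s.duration_s > 7200 for s in sessions) / n,
        "midi_export": sum(s.exported_midi for s in sessions) / n,
        "crash_rate": sum(s.crashed for s in sessions) / n,
    }

print(summarise(sessions))
# {'under_30s': 0.5, 'over_2h': 0.25, 'midi_export': 0.25, 'crash_rate': 0.25}
```

Nothing here needs to identify a user; the questions are all about proportions of sessions.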
The dangers of relying on telemetry
Despite everything mentioned above, I think it's important to state here that we do not intend to rely on telemetry to make decisions. Telemetry can tell you that something isn't being clicked on, but it can't tell you whether the problem is down to bad design or just general disinterest. All telemetry can do is provide the smoke. Feedback and user-testing will show us the fire.
Respecting privacy & being transparent
We are now looking into systems that will let us track how people interact with MuseScore without collecting any personal information. We think the most open way to do this is to ask users for permission (probably via a dialog box) when they install a new version.
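A minimal sketch of what opt-in gating means in practice: no event is ever collected unless the user has explicitly consented. The class and names below are hypothetical, not a description of any system we have chosen.

```python
class Telemetry:
    """Hypothetical collector that drops everything without consent."""

    def __init__(self, consent_given=False):
        self.consent_given = consent_given
        self.queue = []  # events waiting to be sent

    def record(self, event_name):
        # Silently discard events when consent is absent.
        if self.consent_given:
            self.queue.append(event_name)

t = Telemetry(consent_given=False)
t.record("session_start")
print(t.queue)  # [] -- nothing collected without consent

t.consent_given = True  # e.g. set from the install-time dialog
t.record("session_start")
print(t.queue)  # ['session_start']
```

The key property is that the consent check sits in front of collection itself, not just in front of transmission.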
This is our thinking so far, and we'd like to open it up to the community to get your thoughts. Plenty of open-source projects have collected telemetry in the past, and the ones that have been relatively successful (Citra, TimescaleDB, etc.) were open and transparent about the process. However, we're aware that there can be resistance to information collection of any kind, and we'd also like to hear the arguments for not doing it.