![beaming across staves sibelius 8](http://www.rpmseattle.com/of_note/wp-content/uploads/2015/10/image2.png)
MusicXML would be clarified by separating out the concept of using `<voice>` for visual polyphony and layout from using `<voice>` for the analysis of polyphony. This could be done with a separate element for the analytic voice concept, perhaps using a related term such as "line". However, Bob Hamblok pointed out later in the same thread that this could lose a meaningful and valuable name from our markup. An alternative could be to expand the `<voice>` element with attributes or child elements.

In a reply to Michael's last post about the `<voice>` element: isn't it confusing to invent new elements with naming conventions other than what our music-theory history has taught us for years? Is there a possibility of creating new attributes in the `<voice>` element for analytical purposes, or maybe even new child elements within it? My personal opinion is that it is not logical to repurpose an ancient naming convention for other aims because of practical issues and inconveniences in existing software. Layout is just an inheritance of this theory; I think that in the far future it will serve us all if MusicXML matches music theory as much as possible.

When resolving this issue it will indeed be important to keep the design independent of the limitations of much of today's music software, keeping the concepts faithful to what we see in music notation produced both by computer programs and by technology predating the computer. This is related to issue w3c/musicxml#33 on documenting the interaction between the `<voice>` element and other elements such as `<beam>`. As indicated by Joe's comments above, it may also interact with issue w3c/musicxml#35 on clarifying the `<chord>` element. I have encountered a case where widely used applications rely heavily on the `<voice>` element in a way that contradicts other widely used applications.

In my mind, and in Komp, a "voice" is a within-staff concept: a given staff may contain two or more voices.
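To make the "expand the voice element" alternative concrete, here is a purely hypothetical sketch. The `line` attribute shown below does not exist in any version of MusicXML; it is invented here only to illustrate how analytical information might ride alongside the existing visual-polyphony meaning of `<voice>`:

```xml
<!-- Hypothetical only: the "line" attribute is NOT part of any
     MusicXML schema. It sketches one way the existing <voice>
     element (visual polyphony) could also carry an analytical
     label, as discussed in the thread above. -->
<note>
  <pitch><step>A</step><octave>4</octave></pitch>
  <duration>1</duration>
  <voice line="alto">2</voice>
  <type>quarter</type>
</note>
```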
Allowing any combination of voices within a `<chord>` runs counter to this understanding, makes the `<chord>` tag essentially meaningless, and will create even more confusion in what is already a foggy semantic landscape. We already have `<backup>`/`<forward>` to express simultaneity across voices, and `<chord>` to express simultaneity within a voice. I don't think the industry has the luxury of upending a strong de facto understanding of MusicXML's `<chord>` and `<voice>` elements. I dislike using the phrase "de facto" in a dialogue like this, but given the lack of a spec, it's the only recourse.

To clarify this point: if notes in a chord can have different voices, would you also say that they can have different stems? Different beams? If so, how would one determine which stem or beam applied to any given note? Is one supposed to fragment a chord into subsets that share the same `<voice>` (assuming that the tag is even used, since it is not required)? That would be big news to vendors, since (as with `<voice>`) the `<stem>` and `<beam>` elements have hitherto been associated with exactly one note in a grouping, and typically the first. Are we now changing this interpretation? If so, this makes it difficult or impossible for MusicXML importers to determine which simultaneous notes in a `<chord>` should share stems, be horizontally aligned, etc. For the latter, as Peter pointed out, we already have the mechanism of `<backup>` and `<forward>`. Even though Sibelius happens to apply `<voice>` only to the first element of a chord, as Myke says, a `<chord>` (as understood by Dolet, Sibelius, Finale, Noteflight, and many other programs) has always been a unitary grouping within a voice, not across voices. Likewise, the use of `<chord>` has been restricted to notes within a specific voice that are rendered as a coherent visual unit with shared alignment, stemming, and beaming, which is also the usual engraver's notion of "chord". The de facto use of the `<voice>` tag in MusicXML has always been to define explicit, visual polyphony within a part, rather than some abstract "analytical voice" concept.
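A minimal sketch of the de facto pattern described above, assuming a 2/4 measure with `<divisions>1</divisions>`: `<chord>` groups simultaneous notes *within* voice 1 (sharing one stem), while `<backup>` rewinds the musical position so voice 2 can be entered in the same measure:

```xml
<measure number="1">
  <attributes>
    <divisions>1</divisions>
    <time><beats>2</beats><beat-type>4</beat-type></time>
  </attributes>
  <!-- Voice 1: a two-note half-note chord; <chord> marks the second
       note as simultaneous with the first, within the same voice.
       Only the first note of the grouping carries <stem>. -->
  <note>
    <pitch><step>C</step><octave>5</octave></pitch>
    <duration>2</duration>
    <voice>1</voice>
    <type>half</type>
    <stem>up</stem>
  </note>
  <note>
    <chord/>
    <pitch><step>E</step><octave>5</octave></pitch>
    <duration>2</duration>
    <voice>1</voice>
    <type>half</type>
  </note>
  <!-- Rewind two divisions, then enter voice 2 at the same position -->
  <backup><duration>2</duration></backup>
  <note>
    <pitch><step>G</step><octave>4</octave></pitch>
    <duration>2</duration>
    <voice>2</voice>
    <type>half</type>
    <stem>down</stem>
  </note>
</measure>
```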
Joe Berkovitz summarized the issue well in the MusicXML forum: MusicXML 3.0's treatment of the `<voice>` element is confusing. In its MuseData origin it was associated with editorial information and polyphonic musical analysis. In the majority of its implementations as an interchange format, it represents visual rather than analytical polyphony within a part.