Profiling has shown that the largest portion of CPU time is spent compiling the abc mark-up that is sent as input to the *abc2ps* external renderer.
However, there is no reason to re-compile the whole score each time a note pitch is altered, for example. Instead, an incremental compilation method needs to be devised, where cue points are maintained in the resulting abc mark-up that point to specific structures in the data model, starting at voice level. Since the user will most often be editing a single voice's content, compiling only the abc needed to render that voice will greatly improve performance. After compiling the fragment, it is spliced into the existing abc mark-up at the cue's location, and the positions of all subsequent cues are updated as needed so that they remain true to the structures they represent.
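The splicing step described above can be sketched as follows. This is a minimal, hypothetical illustration: the names `Cue`, `compile_voice`, and `patch_abc` are invented for this sketch and are not part of abc2ps or any real abc library.

```python
from dataclasses import dataclass

@dataclass
class Cue:
    voice_id: str   # data-model structure this cue points to
    start: int      # offset of the voice's fragment in the abc mark-up
    length: int     # length of that fragment

def compile_voice(voice_id: str, notes: list[str]) -> str:
    """Hypothetical stand-in for compiling one voice to abc mark-up."""
    return f"V:{voice_id}\n" + " ".join(notes) + "\n"

def patch_abc(abc: str, cues: list[Cue], voice_id: str, notes: list[str]) -> str:
    """Recompile a single voice and splice it into the existing mark-up,
    shifting all subsequent cues by the size difference."""
    fragment = compile_voice(voice_id, notes)
    cue = next(c for c in cues if c.voice_id == voice_id)
    delta = len(fragment) - cue.length
    # Replace only the edited voice's fragment in the full mark-up.
    abc = abc[:cue.start] + fragment + abc[cue.start + cue.length:]
    cue.length = len(fragment)
    # Keep later cues pointing at the structures they represent.
    for c in cues:
        if c.start > cue.start:
            c.start += delta
    return abc
```

For example, after appending a note to the first of two voices, only that voice's fragment is recompiled, and the second voice's cue offset shifts by the fragment's growth in length.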
Unfortunately, it is unclear for the time being whether similar optimizations can be applied to the SVG rendering process.