This article is about developing aspects of musical expression utilizing a number of existing Csound opcodes. Csound has many opcodes which provide controls for manipulating sound at a detailed level, allowing for expressive and dynamic music making. By implementing expressive features, the rendered audio can have qualities which evolve and change over time, supporting the gestures of musical expression.
Versions used for this article were Csound 6. The general subject of musical expression is a large area for study and research because it also includes the psychological aspects of the listener's perception. De Poli proposed the notion of "complete" and "partial" computational models for observing musical expression, where the partial model aims to explain only what can be explained by a robust set of rules, such as at the note level.
Following that idea, the descriptions below, at the opcode level, provide guidelines and code for help with the gestures of musical expression.
In computer music, because there is often no common practice notation score, expression can include just about any aspect beyond the given pitches of a piece of music. Approaches to expression include aspects such as signal generation and modification, audio and MIDI control, and real-time features, all of which can be controlled at a detailed level using Csound opcodes.
Because the code often takes the place of the common practice score, expression in computer music can also be extended to include ideas of the elegance of code's expression, such as "expressions", macros, UDOs, and functions.
These are just a few of the ways of achieving expression in the design of the code. When working with code, compared to the live performer's neuro-muscular responses, the process is magnified, prolonged, and mathematically and scientifically calculated and adjusted to achieve the desired result.
This is without the sense of touch, pressure, vibration, sound location, and level of intensity that the live performer feels and reacts to immediately. If expression in the music we are creating is our goal, then we want our code to represent, on some level, the same expressive quality that a live performer is able to instill in the music.
In one sense, the ability of computers, through audio applications, to recreate the broad range of human musical expression remains an aspirational goal.
While sampling and realtime applications with human-computer interfaces have helped solve the challenge of creating expression to a great extent, achieving expression or expressivity still remains arduous in many audio applications. The level of detail with which the user is able to manipulate sound in Csound is a tribute to the application's longevity and the willingness of opcode developers to continue to provide tools that are flexible under a wide range of conditions for producing computer music.
With time and patience it is possible to develop musical gestures over which one has extensive control using Csound. Envelopes help with expression in several important ways. Nuance, or shaping the sound by controlling amplitude over time, is a primary function of envelopes.
Shown below is an oscillator used as an envelope, expressed using Csound's function syntax. For a legato sound, in the standard numeric score, a negative p3 value for duration implies a held note. The negative p3 field in the score may also be recognized by a conditional statement in the instrument, to branch conditionally and apply envelopes. Csound is a modular software synthesizer: it contains specialized modules (opcodes) and a number of different variable types, utilized as opcode inputs and outputs, that can be used to control instruments.
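The original code listings did not survive in this copy of the article, so the following is a reconstruction of the ideas just described rather than the author's own listing: an oscillator reading an envelope table once per note, written in Csound 6 function syntax, plus a tied-note branch using tival. The half-sine table giEnv and the p-field layout (p4 = amplitude, p5 = frequency) are assumptions.

```csound
giEnv ftgen 0, 0, 4096, 9, 0.5, 1, 0   ; half sine, read once per note as an envelope

instr 1
  ; one pass through the table over the note duration shapes the amplitude
  aenv = oscili:a(p4, 1/p3, giEnv)
  asig = oscili:a(aenv, p5)
  out asig
endin

instr 2
  itie   tival                       ; 1 when this note is tied (negative p3 in the score)
  iatt   = (itie == 0 ? 0.05 : 0)    ; no new attack on a tied note
  istart = (itie == 0 ? 0 : p4)      ; a tied note resumes at full level
  aenv   linsegr istart, iatt, p4, 0.1, 0
  asig   oscili aenv, p5
  out asig
endin
```

Score lines such as `i2 0 -1 0.3 440` followed by `i2 1 1 0.3 550` would then hold the first note legato into the second.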
An understanding of Csound's update rates (setup only, i-rate, k-rate, or a-rate) for variables, and of how ksmps sets the number of samples in a control period and thus the resolution of the updates, is practical and helpful when crafting signals that may change over time.
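The three performance rates can be sketched in a few lines (the sr and ksmps values here are arbitrary):

```csound
sr     = 44100
ksmps  = 32         ; every k-variable updates once per 32 samples (~0.7 ms)
nchnls = 1
0dbfs  = 1

instr 1
  icps = p4               ; i-rate: fixed once, at note initialization
  kamp line 0, p3, 0.5    ; k-rate: recomputed every ksmps samples
  asig oscili kamp, icps  ; a-rate: computed for every single sample
  out asig
endin
```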
A number of opcodes show the maturity or evolution of the opcode's development to include more control, for example port and portk, oscili and oscilikt, and jitter and jitter2. Opcodes are often employed in various combinations to produce an output. If an opcode accepts arguments at different rates, it is called polymorphic.
Primarily utilizing an opcode alone for sound generation we might call, for lack of a better word, "monousance". In general, when an opcode has more input variables, of various types, it signifies a greater potential for expressivity. A very simple vibrato, for example, is achieved by employing a low frequency oscillator at a given amplitude and summing its output with a nominal frequency. The desire for greater control over the vibrato for expressivity has resulted in a more intricate opcode, such as vibrato by Gabriel Maldonado.
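The simple LFO-summed-with-frequency vibrato might be sketched as follows (depth and rate values are illustrative; oscili's table argument is omitted, which in Csound 6 defaults to an internal sine):

```csound
instr 1
  kdepth = 4                    ; vibrato width in Hz
  krate  = 5.5                  ; vibrato speed in Hz
  klfo   oscili kdepth, krate   ; low frequency oscillator
  asig   oscili p4, p5 + klfo   ; LFO output summed with the nominal frequency
  out asig
endin
```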
The syntax for vibrato, which is a key component in musical expressivity, is available in the Csound Manual and shown below. Most of the input variables are k-rate variables and allow the behavior of the vibrato to change over time. Amplitude and frequency components are separated and controlled by random, average, or minimum and maximum amounts. The phase amplitude or frequency is summed with a beginning amplitude or frequency, that sum is multiplied by a target amplitude or frequency, and the whole phrase is multiplied by a default random amplitude or frequency.
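The listing was lost from this copy; the signature below is taken from memory of the Canonical Csound Reference Manual (verify argument names against your manual version), and the call in instr 1 uses purely illustrative values:

```csound
; signature (per the Canonical Csound Reference Manual):
;   kout vibrato kAverageAmp, kAverageFreq, kRandAmountAmp, kRandAmountFreq,
;                kAmpMinRate, kAmpMaxRate, kcpsMinRate, kcpsMaxRate, ifn [, iphs]

giSine ftgen 0, 0, 8192, 10, 1

instr 1
  kvib vibrato 6, 5, 0.3, 0.5, 2, 8, 2, 8, giSine   ; illustrative values
  asig oscili p4, p5 + kvib
  out asig
endin
```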
These values are able to update based on the instrument's ksmps or control rate, which helps provide the means for a very powerful and expressive vibrato. Csound's FM instrument models and STK models also have a large number of input control variables, many of which are designed to simulate aspects of the physical nature of the sound. They easily allow for the implementation of expressivity.
See more information on the STK opcodes below. The human performer's approach to expression in music can become an artful mix of the inner emotion, intuition, and impulses for the instantaneous outward creation of sound.
Expression, by the performer in realtime performance, is implemented as a neuro-muscular response and an interaction between the live performer and his or her instrument as the music is sounding. The feelings or urges of expression are combined with years of careful study and neuro-muscular feedback to master the extent of expression possible on an instrument or voice. Playing a live instrument allows you to make a change instantaneously, and you can see, hear, and feel the result immediately.
Murray-Browne et al. note that working with realtime properties for live performance, utilizing controllers, GUIs, and instruments, is a very strong method of achieving a human feel and expressivity when working with computer music. The ability of the performer to interact instantaneously with an instrument, making decisions which affect the musical outcome, involves a multitude of data, conditions, branches, and decisions, which might be represented by many lines of code when attempting the same level of decision making in a code-driven computer music approach.
The STK (Synthesis Toolkit) opcodes accept up to 8 k-rate controller pairs, each of which consists of a controller number (kc) followed by a controller value (kv). These parameters can be driven by code, such as a simple line opcode that changes controller values over time, or they can also be assigned as MIDI controller messages, using for example the outkc opcode to send MIDI controller messages at k-rate. Tempo and rhythmic analysis are often used to measure expressivity in performance and to differentiate between performances.
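A controller pair driven by line might look like the sketch below. The controller numbers used here (1 = vibrato gain, 11 = vibrato frequency) are recalled from the STKClarinet manual page and should be verified for the opcode you actually use:

```csound
instr 1
  ; ramp one controller value over the duration of the note
  kVibAmt line 0, p3, 64
  ; controller-number/value pairs follow the frequency and amplitude arguments
  asig STKClarinet p4, p5, 1, kVibAmt, 11, 40
  out asig
endin
```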
Words such as "interpretation", "groove", "feel", and "stretch" are often used to describe these qualities. On the code level, Csound has opcodes which allow dynamic changes in tempo and rhythm, producing output values which can be utilized as control inputs to sound-producing opcodes. The loopseg, loopsegp, looptseg, and loopxseg opcodes generate control signals between two or more specified points. The main differences are that loopsegp allows changing the phase at k-rate, looptseg contains a k-rate trigger, and loopxseg employs exponential segments between points.
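A minimal loopseg sketch is shown below. The argument order (value/time pairs after kfreq, ktrig, and iphase, with times as relative proportions of one loop cycle) is recalled from the Canonical Csound Reference Manual and should be checked against your manual version:

```csound
instr 1
  ; a two-segment amplitude shape (0 -> 0.8 -> 0) looped at 0.5 Hz
  kamp loopseg 0.5, 0, 0, 0, 0.5, 0.8, 0.5, 0
  asig oscili kamp, p4
  out asig
endin
```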
As an alternative example of longevity and maturity in instrument building, consider the input variation available on the computer music instrument shown below. The extent of variation available in this instrument is not unlike when a human performer plays the violin, where various choices must be made, and random events occur, when producing the sound. Extending Csound by building your own custom expressive opcode can be done as a plugin library or in the manner of a new unit generator. Building a plugin library can be accomplished either by compiling your code and the required header with the Csound source code using CMake, which will generate the plugin library, or by compiling just your code and the required header to a shared library using the command line or an IDE and linking the generated library to Csound via libcsound.
Unit generators, as well as other standard opcodes and various Engine and Top Level functions, are compiled as part of libcsound (or libcsound64). These modules are listed as arrays in entry1.c, which include, among other aspects, their ins, outs, and rates.
They are also listed as libcsound sources in CMakeLists.txt. If one designs an original expressive opcode, then creating a user-defined opcode first helps as a guide or proof of concept for how the opcode should function. A user-defined opcode makes it possible to view the inputs and outputs of the opcode clearly; these will become part of the OENTRY that defines the opcode when the code is compiled and built as a shared library to become a plugin opcode.
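The article's UDO listing was not preserved in this copy; the sketch below reconstructs the described idea — John ffitch's lfo opcode repurposed from frequency modulation to amplitude modulation — with an assumed opcode name and argument list:

```csound
opcode Tremolo, a, akk
  asig, kdepth, krate xin
  klfo lfo kdepth, krate      ; John ffitch's lfo opcode, default sine shape
  aout = asig * (1 + klfo)    ; amplitude modulation; keep kdepth below 1
  xout aout
endop

instr 1
  asig  oscili p4, p5
  atrem Tremolo asig, 0.4, 6  ; 6 Hz tremolo at 40% depth
  out atrem
endin
```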
The C code for compiling and building the tremolo plugin opcode is located with the example files for this article. The code and UDO shown above both utilize John ffitch's lfo opcode. In this example the frequency modulation employing a low frequency oscillator is changed to amplitude modulation, creating the tremolo effect. In this article, the usefulness of utilizing Csound for the creation of expressive musical gestures has been shown, with emphasis on descriptions at the opcode level.
Opcode development in Csound has provided tools which are flexible under a wide range of conditions, and make it possible to develop musical gestures over which one has extensive control. The following is a brief conceptual idea for smart instruments that are semaphore-like, global instruments which analyze, adapt, and change other instruments based on the acoustic environment and conditions.
The concept includes aspects of existing approaches to computer music, such as analysis, filtering, and mixing or summing, and also the use of existing opcodes in Csound which allow one to assign variables for input. Not unlike a filter with a feedback loop, the difference is that instead of affecting the sound as a filter does, the smart instrument would contain a plug or hook back to the original sound-producing instrument, sending it data that notifies it to change its variable values and thereby affecting the way it produces sound at a particular point in time.
Thus smart instruments are a kind of watchdog instrument, monitoring through analysis and notifying instruments to adapt and change their behaviors. In terms of expression, monitoring for averages, and for statistical variances above and below the norms at critical temporal and amplitude points, could provide adjustments in, for example, vibrato, envelope, tempo, or timbre.
The inputs to the semaphore instrument might include various envelopes and instrument hooks, where there would be an envelope to analyze and send back data to the instruments on the state of vibrato, or global amplitudes for example. The semaphore instruments would function on a meta-level adding a kind of polishing or mastering to the music.
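One way such a semaphore instrument might be sketched with existing opcodes is shown below. The global bus, the rms threshold, and the vibrato rule are purely illustrative assumptions, not part of the proposal itself:

```csound
gaBus      init 0    ; global audio bus the semaphore instrument monitors
gkVibDepth init 6    ; control value the semaphore writes and instruments read

instr 1                          ; ordinary sound-producing instrument
  klfo oscili gkVibDepth, 5.5    ; vibrato depth is externally adjustable
  asig oscili p4, p5 + klfo
  gaBus = gaBus + asig
  out asig
endin

instr 100                        ; always-on "semaphore": analyze, then notify
  krms rms gaBus
  ; hypothetical rule: narrow the vibrato when the overall level is high
  gkVibDepth = (krms > 0.2 ? 2 : 6)
  clear gaBus                    ; empty the bus for the next control cycle
endin
```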
Interestingly, this is something most live performers are not able to do; they leave the overall sound to the sound engineer running the sound system. The difference here is that the semaphore instruments do not change the sound or level of expression directly, but notify the instruments to change their behavior. Many of the audio analysis features available in the MPEG-7 ISO standard might provide possible solutions for helping to analyze audio content.