Monday, December 30, 2013

Blog Status : Moved, not dead!

Hi all,

Once upon a time I posted on this blog (somewhat) regularly. Currently I don't. Why? I'm running OpenAV Productions, and that is where the updates are!

If you're still interested in Linux audio, C++ programming or software in general, check out the site:
www.openavproductions.com

Developers may have a particular interest in topics like implementing NSM or dealing with memory in real time.

Audio programming folks: check out some articles I've written on real-time programming, memory management, implementing NSM and more:
http://openavproductions.com/conferences.

I probably won't post here again for a long while, so bye for now! -Harry

Saturday, June 22, 2013

Real-time audio programming languages

Introduction

Over the last couple of years I've written various real-time audio programs. It's difficult to adhere to real-time constraints: you've got to get threading and memory management right.

C++ is often considered the obvious choice of language for large real-time audio programs: it's a compiled language, and it is deterministic in time if used carefully. This is necessary for real-time (RT) work, and rules out VM-based languages like Python for any low-latency work.

In C++ there are many ways to achieve real-time safety, one of which I have detailed here: https://github.com/harryhaaren/fypRealtimeCppPrograming
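
As a quick flavour of what "deterministic in time if used carefully" means, here is a minimal sketch of my own (not taken from the repo above): all memory is allocated before the audio thread starts, and the process() call does no locking, allocation or unbounded work.

// Minimal sketch of the core real-time rule: allocate up front, then never
// allocate, lock, or do unbounded work inside the audio callback.
#include <vector>
#include <cstddef>

class Delay
{
  public:
    // the constructor runs in the non-real-time thread: allocating here is fine
    Delay( size_t maxFrames ) : buffer( maxFrames, 0.f ), writePos( 0 ) {}

    // process() runs in the real-time audio thread: no new/malloc/locks in here
    void process( float* audio, size_t nframes )
    {
      for ( size_t i = 0; i < nframes; ++i )
      {
        float delayed      = buffer[ writePos ];
        buffer[ writePos ] = audio[ i ];
        audio[ i ]         = 0.5f * audio[ i ] + 0.5f * delayed;
        writePos           = ( writePos + 1 ) % buffer.size();
      }
    }

  private:
    std::vector<float> buffer;
    size_t             writePos;
};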

Other languages

C++ is one way to go, but recently various other programming languages have become increasingly attractive to the real-time audio programmer. Two languages in particular have caught my eye:

Rust : http://www.rust-lang.org
Iolanguage: http://iolanguage.org

Both of these languages have certain characteristics which make them possible candidates for RT programming.

Rust

Language Overview

Rust is a language built around "blocks" and strict boundaries; integrity, availability and concurrency are its main goals. It uses lightweight tasks with message passing for concurrency, and no shared memory.

The Interesting Stuff

I'm most intrigued by the memory management of the language: everything is immutable unless declared "mut" (mutable), and ownership of objects is very strict. This means that managing resources in a real-time-safe way is well defined, and hence the code will be maintainable.

Three different "pointer" types exist, as well as new concepts like owned boxes and managed boxes... these new concepts may aid memory allocation troubles, but perhaps it complicates them too, I don't have much experience yet with it, so only time will tell...

Learning It

Most of what I know comes straight from their homepage or tutorial:
Homepage: www.rust-lang.org
Tutorial: http://static.rust-lang.org/doc/0.6/tutorial.html

 

Conclusion

A cool language, and if the memory concepts prove useful, it could be an awesome new language to learn for the audio-programming enthusiast.

 

IOlanguage

Language Overview

This is a Smalltalk-inspired language that also incorporates elements from various other languages. It uses actor-based concurrency (a la Act1), and is kept small so it can be embedded. It runs in a small VM.

The Interesting Stuff

Intensive inspection of object instances and program state (as in Lisp) aids debugging significantly. There are extensive concurrency possibilities: coroutines, actors, futures and yield statements allow for flexible "time" programming.

 

Learning It

Extensive documentation and example code here:
http://iolanguage.org/scm/io/docs/IoGuide.html#Introduction

Conclusion

A cool language, but unfortunately it is probably not fully real-time safe or deterministic, since it runs in a VM.

Sum Up

"So what language will I use for my next project?" I hear you ask: well I'm staying with the tried and tested C++ for a while. I've dabbled with Vala previously ( see ValaLooper and Prehear ), but they're not quite suitable to RT work in my opinion.

Although it's nice to work with a slightly higher-level language, it's hard to determine whether the generated code is genuinely real-time safe.

The perfect real-time-safe code, for me, is code so simple that proving it is real-time safe under any conditions is trivial. Then the code is maintainable and readable.

Know of any RT-capable language I've left out? Get in touch: I'm interested in hearing about it!

Sunday, February 10, 2013

LV2 and Atom communication

EDIT: There are now better resources to learn LV2 Atom programming: please use them!
www.lv2plug.in/book
http://lac.linuxaudio.org/2014/video.php?id=24
/EDIT


Situation: You're trying to write a synth or effect, and you need to communicate between your UI and the DSP parts of the plugin, and MIDI doesn't cut it: enter Atom events. I found them difficult to get to grips with, and hope that this guide eases the process of using them to achieve communication.

 

Starting out

I advise you to first read this: http://lv2plug.in/ns/ext/atom/
It is the official documentation for the Atom spec. Just read the description; it gives a good general overview of these things called Atoms.

This is "message passing": we send an Atom event from the UI to the DSP part of the plugin. This message needs to be safe to use in a real-time context.

(Note: it is assumed that the concept of URIDs is familiar to you. If it isn't, go back and read this article: http://harryhaaren.blogspot.ie/2012/06/writing-lv2-plugins-lv2-overview.html )

Step 1: Set up an LV2_Atom_Forge. The lv2_atom_forge_* functions are how you build these events.

LV2_Atom_Forge forge;
lv2_atom_forge_init( &forge, map ); // map = LV2_URID_Map feature

Atoms

Atoms are "plain old data" or POD. They're a sequence of bytes written in a contiguous part of memory. Moving them around is possible with a single memcpy() call.

 

Writing Atoms

Understanding the URID naming convention

We need URIDs to represent functionality, and there's a naming scheme here that is *essential* to understand. Say the functionality we want to represent is the name of a Cat (similar to the official Atom example). Here eg_Cat represents the "noun" or "item" we are sending an Atom about, while eg_name represents something about the eg_Cat.

something_Something represents a noun or item, while something_something (note the missing capital letter) represents an aspect of that noun.

LV2_URID eg_Cat;
LV2_URID eg_name; 


In short: classes and types are Capitalized, and nothing else is.
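
For completeness, here's a minimal sketch of how these two URIDs would be obtained from the host's LV2_URID_Map feature (the URI strings here are hypothetical example URIs, not part of any official spec):

// Hypothetical example URIs for the Cat "noun" and its name property
#define EG_URI   "http://example.org/cat-plugin"
#define EG__Cat  EG_URI "#Cat"
#define EG__name EG_URI "#name"

// "map" is the LV2_URID_Map* feature passed in by the host
uris.eg_Cat  = map->map( map->handle, EG__Cat );
uris.eg_name = map->map( map->handle, EG__name );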

Code to write messages

// A frame is essentially a "holder" for data, so we put our event into an LV2_Atom_Forge_Frame. These frames allow the "appending", or adding in, of data.
LV2_Atom_Forge_Frame frame;


// Here we write a "blank" atom, which contains nothing (yet). We're going to fill that blank in with some data in a minute. A blank is a dictionary of key:value pairs. The property_head is the key, and the value comes after that.
Note that the last parameter to this function represents the noun or type of item the Atom is about.
LV2_Atom* msg = (LV2_Atom*)lv2_atom_forge_blank(
                &forge, &frame, 1, uris.eg_Cat );


// then we write a "property_head": this uses a URID to describe the next bit of data coming up, which will form the value of the key:value dictionary pair.
lv2_atom_forge_property_head(&forge, uris.eg_name, 0);
 

// Now we write the data. Note the call to lv2_atom_forge_string(): we're writing string data here! There's lv2_atom_forge_int(), lv2_atom_forge_float() etc. too!
lv2_atom_forge_string(&forge, "nameOfCat", strlen("nameOfCat") );

// Popping the frame is like the closing } of a function: it's a finished event, and there's nothing more to write into it.

lv2_atom_forge_pop( &forge, &frame);

 

From the UI

// To write messages, we set up a buffer:
uint8_t obj_buf[1024];

// Then we tell the forge to use that buffer

lv2_atom_forge_set_buffer(&forge, obj_buf, 1024);

// now check the "Code to write messages" heading above, that code goes here, where you write the event.

// We have a write_function (from the instantiate() call) and a controller. These are used to send Atoms back to the plugin's DSP part. Note that the type of the event is atom_eventTransfer: this means the host should pass it directly to the input port of the plugin, and not interpret it.
write_function( controller, CONTROL_PORT_NUMBER,
                lv2_atom_total_size( msg ),
                uris.atom_eventTransfer, msg );
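
(In case you're wondering where atom_eventTransfer comes from: it's just another URID, mapped from the URI that the Atom extension header provides.)

// atom_eventTransfer is mapped like any other URID; the URI macro comes
// from the official Atom extension header
#include "lv2/lv2plug.in/ns/ext/atom/atom.h"

uris.atom_eventTransfer = map->map( map->handle, LV2_ATOM__eventTransfer );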



From the DSP

// Set up the forge to write directly to the notify output port. This means that when we create an Atom in the DSP part, we don't allocate memory; we write the Atom directly into the notify port.

const uint32_t notify_capacity = self->notify_port->atom.size;
lv2_atom_forge_set_buffer(&self->forge,
                         (uint8_t*)self->notify_port,
                          notify_capacity);
 

// Start a sequence in the notify output port
lv2_atom_forge_sequence_head(&self->forge,
                             &self->notify_frame, 0);

Now look back at the "Code to write messages" section. that's it, write the event into the Notify atom port, and done.




Reading Atoms


// Read incoming events directly from control_port, the Atom input port
LV2_ATOM_SEQUENCE_FOREACH( self->control_port, ev )
{
  // the UI wrote a "blank" atom, so interpret the event body as an object
  const LV2_Atom_Object* obj = (const LV2_Atom_Object*)&ev->body;

  // check that this object is about our eg_Cat "noun"
  if ( obj->body.otype == self->uris.eg_Cat )
  {
    // get the eg_name property from the object body
    const LV2_Atom* name = NULL;
    lv2_atom_object_get( obj, self->uris.eg_name, &name, 0 );

    if ( name )
    {
      // convert it to the type it is (a string), and use it
      std::string s = (const char*)LV2_ATOM_BODY_CONST( name );
      std::cout << "Cat's name property is " << s << std::endl;
    }
  }
}



Conclusion

That's it. It's not hard; it just takes getting used to. It's actually a very powerful and easy way of designing a program or plugin, as it *demands* separation between the threads, which is a really good thing.

Questions or comments, let me know :) -Harry

Thursday, January 17, 2013

MlTutorial: Working with the MediaLovinToolkit

Hi!

I've been interested in doing some video coding for a while now, but never really got into it yet. Until today, when I re-attempted (yes, I'd tried before :) to achieve some simple functionality with MLT.

Initially I found it very difficult to find any resources on how to use the MLT framework from C++, but some googling led me to various resources scattered around the internet.

The MLT github repo has a super-simple example (which, although informative, doesn't scale up to the use of filters or any advanced functionality):
https://github.com/mltframework/mlt/blob/master/src/examples/play.cpp

A search around the net showed me this post on a forum http://ubuntuforums.org/showthread.php?p=7370184
This seemed more along the lines of what I had hoped for; however, the code segfaults when run...

Finally the "tests" subdir in the MLT tarball provide some test program code, but its difficult to understand (IMO) as its not commented for learning purposes: https://github.com/mltframework/mlt/tree/master/src/tests

So, drawing on these resources, I've decided to bunch together some examples of how to use MLT from C++. The code is online on github, and may be useful to others hoping to learn the MLT framework.

There's currently two "playback" tutorials, and one "filter" tutorial. Reading them will show the rough design of the MLT library, and how to use it. Advanced functionality tutorials will be added as I learn it myself :)
https://github.com/harryhaaren/mltutorial
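
To give a flavour of the API, here's a minimal playback sketch in the spirit of the official play.cpp, assuming the mlt++ C++ bindings are installed (the consumer name "sdl" and the exact constructor arguments may differ between MLT versions):

// Minimal MLT playback sketch, in the spirit of the official play.cpp example.
// Build with something like: g++ play.cpp $(pkg-config --cflags --libs mlt++)
#include <mlt++/Mlt.h>
#include <unistd.h>

int main( int argc, char** argv )
{
  if ( argc < 2 )
    return 1;

  Mlt::Factory::init();                        // load MLT's plugin modules
  Mlt::Profile  profile;                       // default profile: resolution, frame rate
  Mlt::Producer producer( profile, argv[1] );  // the clip to play
  Mlt::Consumer consumer( profile, "sdl" );    // an SDL preview window

  if ( !producer.is_valid() )
    return 1;

  consumer.connect( producer );                // wire producer -> consumer
  consumer.start();

  while ( !consumer.is_stopped() )             // wait until playback finishes
    sleep( 1 );

  Mlt::Factory::close();
  return 0;
}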

Welcoming issues / merge requests from MLT users / devs / anybody!
Cheers, -Harry