First Impressions of Musicovery

Even though I’m into last.fm, of all those services, I do understand if some Pandora maniac cheats on it, as long as the alternative is worth a glimpse. So it was Musicovery.com that also impressed me … at first. And I’ll admit it was mostly because of the blinky-blinky. But there is more to it than mere attention-grabbing effects. From an HMI point of view, Musicovery has really made an effort. It is easy to start listening to what you want, with no reading at all or, if you really want to, very little. In one word: I’d call it intuitive.

You are presented with those, and only those, selections you need to make and combine to narrow your choice down enough to gather fitting songs. The other direction of “communication”, machine to human, also has some promising approaches, like the “neighbourhood map” and colours for genres. One can even drag (move) that map around. The playlist is shown as a path through the graph of audio tracks.

But then, of course, the hacker in me came to the surface and I had to test that stuff. A few clicks after hitting the “dark” mood I was presented with Shakira’s “Objection”. Sure, there’s no accounting for taste, but I wouldn’t call “Objection” a dark-mood song. And Black Eyed Peas’ “Shut Up” was yet to come… I don’t know about you, but I couldn’t keep my feet still while listening, and there was absolutely no “I hate the world” or “Where is my gun to get a rampage going” about it (just being sarcastic here). While the “energetic” direction worked fine for a while, “dark” more and more seems to be a bad label.

To conclude, Musicovery.com nevertheless sounds very promising. I’d really like to know the “music selection techniques” behind it, though, since the longer I listen, the less the tracks picked for a selected mood satisfy me, just like the rest of the lot.

Edit: I just caught my imagination drifting away: Wouldn’t it be possible, in a few years’ time, to have some HMI so that one brachiates through a playlist just like the one displayed at Musicovery, but as some sort of hologram, or not directly visible at all, more like that Wii stuff? So if you want to fast-forward to a track on the playlist (displayed in some sort of 3D neighbourhood map/grid as a ball, e.g.), you grab it and drag it to the middle of the cube, or punch it to play it, pet it to have information displayed about it, …

Using fb2k’s Scripting Language in Its Masstagger?!?

Well, what I’m trying to do is something I thought would be very simple: add a new tag to each file that’s added to fb2k’s media library, holding the current system date, i.e. an “added to library” tag. The tagz script to achieve this is not even the problem.

$if($meta(ADDED_TO_FOOBAR),,%cwb_systemdatetime%)

Or, what seems to be semantically equivalent but more readable (refer to the tagz reference to understand the commands):

$if($not($meta(ADDED_TO_FOOBAR)),%cwb_systemdatetime%)

This even checks whether the file has the tag already. Using the tagz parser in the preferences dialog (Ctrl+P -> Display -> Title Formatting) confirms it works correctly, both when playing a song without the tag and one with the time stamp set.

The set-up is this:

  • with the masstagger (right-click a song -> tagging -> manage scripts, if you haven’t changed the default context menu structure) add “Format value from other fields…”, set the destination field name (ADDED_TO_FOOBAR) and use the script above as the formatting pattern. Hit Return, name your masstagger script and (important!) click the save button. Note: using “Set value…” or the like will not work since, despite intuitive guesses, it does not evaluate tagz scripts but writes them out literally as a string.
  • in fb2k’s preferences dialog select Tools -> New File Tagger and from the drop-down list select Tagging/Scripts/<the name you chose> (do this only after extensive testing on single files with the file’s properties dialog open!)

Now each time a file is added to fb2k’s library this script is run on it. BUT: It doesn’t do what it’s supposed to! What ends up in the files’ tags is

  1. a ‘?’ for those files with no time stamp yet
  2. the existing field being deleted when the value is ‘?’

That’s when I noticed the two scripts are not equivalent: the first writes an empty string to the requested field if it is already present, whereas the latter simply does nothing in that case because there is no else branch. But still, even the latter does not do the desired job.
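
To make the difference concrete, here is a rough Python analogue of how I read the two variants (just my mental model of the tagz semantics, nothing fb2k actually executes):

from datetime import datetime

def current_timestamp():
    # stand-in for %cwb_systemdatetime%
    return datetime.now().strftime("%Y-%m-%d %H:%M:%S")

def variant_one(tags):
    # $if($meta(ADDED_TO_FOOBAR),,%cwb_systemdatetime%)
    if tags.get("ADDED_TO_FOOBAR"):
        return ""                   # implicit else branch: an empty value gets written
    return current_timestamp()

def variant_two(tags):
    # $if($not($meta(ADDED_TO_FOOBAR)),%cwb_systemdatetime%)
    if not tags.get("ADDED_TO_FOOBAR"):
        return current_timestamp()
    return None                     # no else branch: nothing gets written at all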

Then I came up with this script:

$if($not($meta_test(ADDED_TO_FOOBAR)),%cwb_systemdatetime%,$meta(ADDED_TO_FOOBAR))

But once again, the only result is frustration, not the desired time stamp. At least it leaves existing fields untouched.

I could work around this issue with the “Stamp current Time and Date…” bit, but since I was already using fb2k before reinstalling my OS, my music files are partly stamped already. Side note: probably because of this, the field name should rather be something like “ADDED_TO_LIBRARY”. But moving on…

A working workaround I figured out is to

  1. add a “Stamp current Time and Date…” step,
  2. use the following script as the formatting pattern:
     $if($not($meta_test(ADDED_TO_FOOBAR)),$meta(TIMESTAMP),$meta(ADDED_TO_FOOBAR))
  3. add a “Remove Field…” step with the “TIMESTAMP” field selected.

So that leaves me speculating about a bug in either the masstagger or in foo_cwb_hooks. By the way, the time stamping option might come from foo_masstag_addons. You might want to include this masstagger script from a file.
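
For what it’s worth, here is roughly what the whole exercise boils down to outside of fb2k: a minimal Python sketch using the mutagen library, assuming files with Vorbis comments (e.g. FLAC), where arbitrary field names like the suggested ADDED_TO_LIBRARY are allowed. This is just an illustration of the idea, not a replacement for the masstagger setup above.

import sys
from datetime import datetime
from mutagen.flac import FLAC

def stamp_file(path):
    # Write an ADDED_TO_LIBRARY tag holding the current date, but only if it is missing.
    audio = FLAC(path)
    if audio.tags is None:
        audio.add_tags()
    if "ADDED_TO_LIBRARY" not in audio.tags:
        audio.tags["ADDED_TO_LIBRARY"] = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        audio.save()

if __name__ == "__main__":
    for p in sys.argv[1:]:
        stamp_file(p)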

ReplayGain using Foobar2000

In case you wonder, like I did, how to employ foobar2000 (fb2k) to handle the lack of ReplayGain (RG) info in a file’s tags nicely, so it doesn’t blast away your eardrums, I have a set of very useful links. At hydrogenaudio.org I found the Intermediate User Guide for fb2k explaining, among other things, the options one has when setting up fb2k for ReplayGain. Most importantly, one should move the slider in the playback preferences pane for files “without RG info” to a value that reflects the average sound level of all tracks. That, of course, would mean scanning all your files; for my couple of weeks’ worth of playback time, fb2k estimates just over 24h for that job. So it was an easy choice to just stick to the suggested value of -8 dB.

Last but not least, I will mention the two preamps in the Playback preferences. Unless you know exactly what you are doing, it is not recommended to raise the output of the preamps above the default 0.0 dB in any way. However, you can use them to roughly compensate for the difference between replaygained and un-replaygained tracks: simply estimate your average ReplayGain level and lower the preamp for files without ReplayGain info by that value. I found -8 dB to work quite well for me. This obviously should not be used as a substitute for properly ReplayGaining your tracks, but it will definitely protect your ears and your equipment when you come across tracks that are missing ReplayGain info.
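
To spell out what these settings amount to as I understand them (a simplified model with example numbers, not fb2k’s actual internals): the gain applied to a track is roughly the RG track gain plus the matching preamp, or just the lowered preamp when there is no RG info.

def playback_gain_db(track_gain_db, preamp_with_rg_db=0.0, preamp_without_rg_db=-8.0):
    # Simplified model: RG track gain plus its preamp, or only the "no RG info" preamp.
    if track_gain_db is not None:
        return track_gain_db + preamp_with_rg_db
    return preamp_without_rg_db

def db_to_amplitude(db):
    # Convert a dB gain into the linear factor the samples get multiplied by.
    return 10 ** (db / 20.0)

print(playback_gain_db(-8.3))           # replaygained loudness-war track: -8.3 dB
print(playback_gain_db(None))           # track without RG info: -8.0 dB, roughly on par
print(round(db_to_amplitude(-8.0), 3))  # -8 dB is about a factor of 0.398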

Secondly, there is a more detailed description of the Playback settings in the same wiki. Besides some mathematics on RG, it also points out some interesting knowledge about pre-buffering and DSP settings.

Music Analysis — On The Way to Diploma Thesis Topic

To step onwards in finding a subject for my diploma thesis, I’ve googled a little and found the following:

First of all I looked at which topics are being worked on at my university, to maybe narrow things down that way. Our Institute for Digital Media seemed the best guess, offering a seminar by Dr. Dieter Trüstedt called “Elektronische Musik in Theorie und Praxis” (“electronic music in theory and practice”). Only after a while did I notice that its emphasis is on making music, not analysing it. Nevertheless I was pointed to a book by Miller Puckette (Dept. of Music, University of California, San Diego) called “The Theory and Technique of Electronic Music”, which includes some parts about wave analysis in general, digital music, etc.

The issues I’m looking into are as described before, more precisely finding similar music as a starting point. I also found a few (not yet reviewed) papers:

  • Music Database Retrieval Based on Spectral Similarity by Cheng Yang
  • Pattern Discovery Techniques for Music Audio by Roger B. Dannenberg and Ning Hu
  • Toward Automatic Music Audio Summary Generation from Signal Analysis by Geoffroy Peeters, Amaury La Burthe and Xavier Rodet
  • Audio Retrieval by Rhythmic Similarity by Jonathan Foote, Matthew Cooper and Unjung Nam

Also, it came to my mind that one could take into account how humans (or mammals in general) distinguish music or complex sounds, and thus learn more about the brain as well.

Another thought that hit my mind concerning the use of such an analysis was to apply it to, say, meeting recording scenarios as some kind of search algorithm. Imagine you have some 3 hours of a meeting recorded (possibly a conference call) and need a certain part of it, but cannot find the time position by any means. Maybe with the analysis sketched out above one could search it just as we do nowadays with text: speak the word or phrase one is looking for (in a different voice, namely your own) and find its position in the audio file.
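
Purely as a thought experiment, here is a minimal sketch of how such a search could start out, using MFCC features and a naive sliding-window comparison (library choice, parameters and file names are my own assumptions; matching a phrase spoken in a different voice would realistically need something far more robust, e.g. dynamic time warping or a proper speech recogniser):

import numpy as np
import librosa

def find_query_in_recording(recording_path, query_path, sr=16000, n_mfcc=13):
    # Return the time offset (in seconds) where the query sounds most similar
    # to the long recording, using a naive sliding-window MFCC distance.
    y_rec, _ = librosa.load(recording_path, sr=sr)
    y_qry, _ = librosa.load(query_path, sr=sr)

    hop = 512
    mfcc_rec = librosa.feature.mfcc(y=y_rec, sr=sr, n_mfcc=n_mfcc, hop_length=hop)
    mfcc_qry = librosa.feature.mfcc(y=y_qry, sr=sr, n_mfcc=n_mfcc, hop_length=hop)

    q_len = mfcc_qry.shape[1]
    best_pos, best_dist = 0, np.inf
    for start in range(mfcc_rec.shape[1] - q_len + 1):
        window = mfcc_rec[:, start:start + q_len]
        dist = np.mean((window - mfcc_qry) ** 2)   # mean squared frame distance
        if dist < best_dist:
            best_pos, best_dist = start, dist

    return best_pos * hop / sr   # frame index -> seconds

# e.g.: print(find_query_in_recording("meeting.wav", "spoken_query.wav"))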

Blogged with Flock