diff --git a/doc/MDS.rst b/doc/MDS.rst
index 9ef008f..5e33509 100644
--- a/doc/MDS.rst
+++ b/doc/MDS.rst
@@ -1,14 +1,35 @@
-:digest: Dimensionality Reduction with Multidimensional Scaling
+:digest: Multidimensional Scaling
 :species: data
 :sc-categories: Dimensionality Reduction, Data Processing
 :sc-related: Classes/FluidMDS, Classes/FluidDataSet
 :see-also:
 :description:
-   Multidimensional scaling of a :fluid-obj:`DataSet`
-   https://scikit-learn.org/stable/modules/manifold.html#multi-dimensional-scaling-mds
+   Dimensionality Reduction of a :fluid-obj:`DataSet` Using Multidimensional Scaling
+:discussion:
+   Multidimensional Scaling transforms a dataset to a lower number of dimensions while trying to preserve the distance relationships between the data points, so that even with fewer dimensions, the differences and similarities between points can still be observed and used effectively.
+
+   First, MDS computes a distance matrix by calculating the distance between every pair of points in the dataset. It then positions all the points in the lower number of dimensions (specified by ``numDimensions``) and iteratively shifts them around until the distances between all the points in the lower number of dimensions are as close as possible to the distances in the original dimensional space.
+
+   What makes this MDS implementation more flexible than some of the other dimensionality reduction algorithms in FluCoMa is that MDS allows for different measures of distance to be used (see the list below, followed by a short worked example).
+
+   Note that unlike the other dimensionality reduction algorithms, MDS does not have a ``fit`` or ``transform`` method, nor does it have the ability to transform data points in buffers. This is because the algorithm needs to do the fit and transform in one pass, using just the data provided in the source DataSet; incorporating new data points would therefore require re-fitting the model.
+
+   **Manhattan Distance:** The sum of the absolute differences between points in each dimension. This is also called the Taxicab Metric. https://en.wikipedia.org/wiki/Taxicab_geometry
+
+   **Euclidean Distance:** The square root of the sum of the squared differences between points in each dimension (Pythagorean Theorem). https://en.wikipedia.org/wiki/Euclidean_distance This metric is the default, as it is the most commonly used.
+
+   **Squared Euclidean Distance:** The square of the Euclidean Distance between points. This distance measure penalises larger distances more strongly, making them seem more distant, which may reveal more clustered points. https://en.wikipedia.org/wiki/Euclidean_distance#Squared_Euclidean_distance
+
+   **Minkowski Max Distance:** The distance between two points is reported as the largest difference between those two points in any one dimension. Also called the Chebyshev Distance or the Chessboard Distance. https://en.wikipedia.org/wiki/Chebyshev_distance
+
+   **Minkowski Min Distance:** The distance between two points is reported as the smallest difference between those two points in any one dimension.
+
+   **Symmetric Kullback Leibler Divergence:** Because the first part of this computation uses the logarithm of the values, using the Symmetric Kullback Leibler Divergence only makes sense with non-negative data. https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence#Symmetrised_divergence
+
+   .. **Cosine Distance:** Cosine Distance considers each data point a vector in Cartesian space and computes the angle between the two points. It first normalizes these vectors so they both sit on the unit circle and then finds the dot product of the two vectors, which returns a calculation of the angle. Therefore this measure does not consider the magnitudes of the vectors when computing distance. https://en.wikipedia.org/wiki/Cosine_similarity (This article describes the cosine _similarity_, as opposed to distance; however, since the cosine similarity is always between -1 and 1, the distance is computed as 1 - cosine similarity, which will always range from a minimum distance of 0 to a maximum distance of 2.)
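+
+   For example, between the points (0, 0) and (3, 4): the Manhattan Distance is |3| + |4| = 7; the Euclidean Distance is sqrt(3^2 + 4^2) = 5; the Squared Euclidean Distance is 25; the Minkowski Max Distance is max(3, 4) = 4; and the Minkowski Min Distance is min(3, 4) = 3.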
 :control numDimensions:

@@ -16,15 +37,37 @@

:control distanceMetric:

-   The distance metric to use (integer, 0-6, see flags above)
+   The distance metric to use (integer 0-5)
+
+   :enum:
+
+      :0:
+         Manhattan Distance
+
+      :1:
+         Euclidean Distance (default)
+
+      :2:
+         Squared Euclidean Distance
+
+      :3:
+         Minkowski Max Distance
+
+      :4:
+         Minkowski Min Distance
+
+      :5:
+         Symmetric Kullback Leibler Divergence
+   .. :6:
+   ..    Cosine Distance

:message fitTransform:

-   :arg sourceDataSet: Source data, or the DataSet name
+   :arg sourceDataSet: Source DataSet

-   :arg destDataSet: Destination data, or the DataSet name
+   :arg destDataSet: Destination DataSet

   :arg action: Run when done

-   Fit the model to a :fluid-obj:`DataSet` and write the new projected data to a destination FluidDataSet.
+   Fit the model to a :fluid-obj:`DataSet` and write the new projected data to a destination DataSet.

diff --git a/doc/MLPClassifier.rst b/doc/MLPClassifier.rst
index c7ac0b9..6ed5fec 100644
--- a/doc/MLPClassifier.rst
+++ b/doc/MLPClassifier.rst
@@ -1,69 +1,85 @@
 :digest: Classification with a multi-layer perceptron
 :species: data
 :sc-categories: Machine learning
-:sc-related: Classes/FluidMLPRegressor, Classes/FluidDataSet
+:sc-related: Classes/FluidMLPRegressor, Classes/FluidDataSet, Classes/FluidLabelSet
 :see-also:
-:description: Perform classification between a :fluid-obj:`DataSet` and a :fluid-obj:`LabelSet` using a Multi-Layer Perception neural network.
+:description:
+   Perform classification between a :fluid-obj:`DataSet` and a :fluid-obj:`LabelSet` using a Multi-Layer Perceptron neural network.

-:control hidden:
+:discussion:

-   An ``Classes/Array`` that gives the sizes of any hidden layers in the network (default is two hidden layers of three units each).
+   For a thorough explanation of how this object works and more information on the parameters, visit the pages on **MLP Training** (https://learn.flucoma.org/learn/mlp-training) and **MLP Parameters** (https://learn.flucoma.org/learn/mlp-parameters).
+
+:control hiddenLayers:
+
+   An array of numbers that specifies the internal structure of the neural network. Each number in the list represents one hidden layer of the neural network; its value is the number of neurons in that layer. Changing this will reset the neural network, clearing any learning that has happened.
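+
+   For example, setting this to an array of [12, 7] specifies two hidden layers: the first with 12 neurons, the second with 7.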
:control activation:

-   The activation function to use for the hidden layer units. Beware of the permitted ranges of each: relu (0->inf), sigmoid (0->1), tanh (-1,1).
+   An integer indicating which activation function each neuron in the hidden layer(s) will use. Changing this will reset the neural network, clearing any learning that has happened. The options are:
+
+   :enum:
+
+      :0:
+         **identity** (the output range can be any value)
+
+      :1:
+         **sigmoid** (the output will always be greater than 0 and less than 1)
+
+      :2:
+         **relu** (the output will always be greater than or equal to 0)
+
+      :3:
+         **tanh** (the output will always be greater than -1 and less than 1)

:control maxIter:

-   The maximum number of iterations to use in training.
+   The number of epochs to train for when ``fit`` is called on the object. An epoch consists of training on all the data points one time.

:control learnRate:

-   The learning rate of the network. Start small, increase slowly.
+   A scalar for indicating how much the neural network should adjust its internal parameters during training. This is the most important parameter to adjust while training a neural network.

:control momentum:

-   The training momentum, default 0.9
+   A scalar that applies a portion of previous adjustments to a current adjustment being made by the neural network during training.

:control batchSize:

-   The training batch size.
+   The number of data points to use in between adjustments of the MLP's internal parameters during training.

:control validation:

-   The fraction of the DataSet size to hold back during training to validate the network against.
-
+   A fraction (represented as a decimal between 0 and 1) of the data points to randomly select, set aside, and not use for training (this "validation set" is reselected on each ``fit``). These points will be used after each epoch to check how the neural network is performing. If it is found to be no longer improving, training will stop, even if a ``fit`` has not reached its ``maxIter`` number of epochs.

:message fit:

   :arg sourceDataSet: Source data

-   :arg targetLabelSet: Target data
-
-   :arg action: Function to run when training is complete
+   :arg targetLabelSet: Target labels

-   Train the network to map between a source :fluid-obj:`DataSet` and a target :fluid-obj:`LabelSet`
+   :arg action: Function to run when complete. This function will be passed the current error as its only argument.
+
+   Train the network to map between a source :fluid-obj:`DataSet` and target :fluid-obj:`LabelSet`

:message predict:

   :arg sourceDataSet: Input data

-   :arg targetLabelSet: Output data
+   :arg targetLabelSet: :fluid-obj:`LabelSet` to write the predicted labels into

   :arg action: Function to run when complete

-   Apply the learned mapping to a :fluid-obj:`DataSet` (given a trained network)
+   Predict labels for a :fluid-obj:`DataSet` (given a trained network)

:message predictPoint:

   :arg sourceBuffer: Input point

-   :arg targetBuffer: Output point
-
-   :arg action: A function to run when complete
+   :arg action: A function to run when complete. This function will be passed the predicted label.

-   Apply the learned mapping to a single data point in a |buffer|
+   Predict a label for a single data point in a |buffer|

:message clear:

diff --git a/doc/MLPRegressor.rst b/doc/MLPRegressor.rst
index 937625f..5041a14 100644
--- a/doc/MLPRegressor.rst
+++ b/doc/MLPRegressor.rst
@@ -3,49 +3,67 @@
 :sc-categories: Machine learning
 :sc-related: Classes/FluidMLPClassifier, Classes/FluidDataSet
 :see-also:
-:description: Perform regression between :fluid-obj:`DataSet`\s using a Multi-Layer Perception neural network.
+:description:
+   Perform regression between :fluid-obj:`DataSet`\s using a Multi-Layer Perceptron neural network.

-:control hidden:
+:discussion:

-   An ``Classes/Array`` that gives the sizes of any hidden layers in the network (default is two hidden layers of three units each).
+   For a thorough explanation of how this object works and more information on the parameters, visit the pages on **MLP Training** (https://learn.flucoma.org/learn/mlp-training) and **MLP Parameters** (https://learn.flucoma.org/learn/mlp-parameters).
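+
+   For example, a hypothetical starting configuration might use two hidden layers of 12 and 7 neurons (``hiddenLayers``), a ``learnRate`` of 0.01, a ``batchSize`` of 5, and a ``validation`` of 0.2; see the parameter descriptions below.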
+
+:control hiddenLayers:
+
+   An array of numbers that specifies the internal structure of the neural network. Each number in the list represents one hidden layer of the neural network; its value is the number of neurons in that layer. Changing this will reset the neural network, clearing any learning that has happened.

:control activation:

-   The activation function to use for the hidden layer units. Beware of the permitted ranges of each: relu (0->inf), sigmoid (0->1), tanh (-1,1)
+   An integer indicating which activation function each neuron in the hidden layer(s) will use. Changing this will reset the neural network, clearing any learning that has happened. The options are:
+
+   :enum:
+
+      :0:
+         **identity** (the output range can be any value)
+
+      :1:
+         **sigmoid** (the output will always be greater than 0 and less than 1)
+
+      :2:
+         **relu** (the output will always be greater than or equal to 0)
+
+      :3:
+         **tanh** (the output will always be greater than -1 and less than 1)

:control outputActivation:

-   The activation function to use for the output layer units. Beware of the permitted ranges of each: relu (0->inf), sigmoid (0->1), tanh (-1,1)
+   An integer indicating which activation function each neuron in the output layer will use. Options are the same as ``activation``. Changing this will reset the neural network, clearing any learning that has happened.

:control tapIn:

-   The layer whose input is used to predict and predictPoint. It is 0 counting, where the default of 0 is the input layer, and 1 would be the first hidden layer, and so on.
+   The index of the layer to use as input to the neural network for ``predict`` and ``predictPoint`` (zero counting). The default of 0 is the first layer (the original input layer), 1 is the first hidden layer, etc. This can be used to access different parts of a trained neural network, such as the encoder or decoder of an autoencoder (https://towardsdatascience.com/auto-encoder-what-is-it-and-what-is-it-used-for-part-1-3e5c6f017726).

:control tapOut:

-   The layer whose output to return. It is counting from 0 as the input layer, and 1 would be the first hidden layer, and so on. The default of -1 is the last layer of the whole network.
+   The index of the layer to use as output of the neural network for ``predict`` and ``predictPoint`` (zero counting). The default of -1 is the last layer (the original output layer). This can be used to access different parts of a trained neural network, such as the encoder or decoder of an autoencoder (https://towardsdatascience.com/auto-encoder-what-is-it-and-what-is-it-used-for-part-1-3e5c6f017726).

:control maxIter:

-   The maximum number of iterations to use in training.
+   The number of epochs to train for when ``fit`` is called on the object. An epoch consists of training on all the data points one time.

:control learnRate:

-   The learning rate of the network. Start small, increase slowly.
+   A scalar for indicating how much the neural network should adjust its internal parameters during training. This is the most important parameter to adjust while training a neural network.

:control momentum:

-   The training momentum, default 0.9
+   A scalar that applies a portion of previous adjustments to a current adjustment being made by the neural network during training.
:control batchSize:

-   The training batch size.
+   The number of data points to use in between adjustments of the MLP's internal parameters during training.

:control validation:

-   The fraction of the DataSet size to hold back during training to validate the network against.
-
+   A fraction (represented as a decimal between 0 and 1) of the data points to randomly select, set aside, and not use for training (this "validation set" is reselected on each ``fit``). These points will be used after each epoch to check how the neural network is performing. If it is found to be no longer improving, training will stop, even if a ``fit`` has not reached its ``maxIter`` number of epochs.

:message fit:

@@ -53,8 +71,8 @@

   :arg targetDataSet: Target data

-   :arg action: Function to run when training is complete
-
+   :arg action: Function to run when complete. This function will be passed the current error as its only argument.
+
   Train the network to map between a source and target :fluid-obj:`DataSet`

:message predict:

diff --git a/example-code/sc/MDS.scd b/example-code/sc/MDS.scd
index 1185a77..b0683ef 100644
--- a/example-code/sc/MDS.scd
+++ b/example-code/sc/MDS.scd
@@ -1,117 +1,56 @@
-
code::
-//Preliminaries: we want some audio, a couple of FluidDataSets, some Buffers, a FluidStandardize and a FluidMDS
-(
-~audiofile = FluidFilesPath("Tremblay-ASWINE-ScratchySynth-M.wav");
-~raw = FluidDataSet(s);
-~standardized = FluidDataSet(s);
-~reduced = FluidDataSet(s);
-~audio = Buffer.read(s,~audiofile);
-~mfcc_feature = Buffer.new(s);
-~stats = Buffer.alloc(s, 7, 12);
-~standardizer = FluidStandardize(s);
-~mds = FluidMDS(s);
-)
-
-// Load audio and run an mfcc analysis, which gives us 13 points (we'll throw the 0th away)
-(
-~audio = Buffer.read(s,~audiofile);
-FluidBufMFCC.process(s,~audio, features: ~mfcc_feature);
-)
+~src = Buffer.read(s,FluidFilesPath("Nicol-LoopE-M.wav"));

-// Divide the time series in 100, and take the mean of each segment and add this as a point to
-// the 'raw' FluidDataSet
(
-{
-	var trig = LocalIn.kr(1, 1);
-	var buf = LocalBuf(12, 1);
-	var count = PulseCount.kr(trig) - 1;
-	var chunkLen = (~mfcc_feature.numFrames / 100).asInteger;
-	var stats = FluidBufStats.kr(
-		source: ~mfcc_feature, startFrame: count * chunkLen,
-		startChan:1, numFrames: chunkLen, stats: ~stats, trig: trig, blocking:1
-	);
-	var rd = BufRd.kr(12, ~stats, DC.kr(0), 0, 1);
-	var bufWr, dsWr;
-	12.do{|i|
-		bufWr = BufWr.kr(rd[i], buf, DC.kr(i));
-	};
-	dsWr = FluidDataSetWr.kr(~raw, buf: buf, idNumber: count, trig: Done.kr(stats),blocking:1);
-	LocalOut.kr(Done.kr(dsWr));
-	FreeSelf.kr(count - 99);
-	Poll.kr(trig,(100-count));
-}.play;
+~features = Buffer(s);
+FluidBufMFCC.processBlocking(s,~src,features:~features,startCoeff:1);
+~ds = FluidDataSet(s).fromBuffer(~features);
+FluidMDS(s).fitTransform(~ds,~ds);
+FluidNormalize(s).fitTransform(~ds,~ds);
+~ds.dump({
+	arg dict;
+	{FluidPlotter(bounds:Rect(0,0,800,800),dict:dict)}.defer;
+});
)
-// wait for the count to reaches 0 in the post window.
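+
+// a possible variation (untested sketch): the old version of this example standardized the
+// data first so that all MFCC dimensions are on comparable scales before the MDS distance
+// computations; to try that, run this line before the FluidMDS line above:
+// FluidStandardize(s).fitTransform(~ds,~ds);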
-//First standardize our DataSet, so that the MFCC dimensions are on comensurate scales
-//Then apply the MDS in-place on the standardized data to get 2 dimensions, using a Euclidean distance metric
-//Download the DataSet contents into an array for plotting
-(
-~reducedarray = Array.new(100);
-~standardizer.fitTransform(~raw, ~standardized);
-~mds.fitTransform(~standardized, ~reduced, action:{
-	~reduced.dump{|x| 100.do{|i|
-	~reducedarray.add(x["data"][i.asString])
-}}});
-)
::
strong::Comparing Distance Measures::
-//Visualise the 2D projection of our original 12D data
-(
-d = ~reducedarray.flop.deepCollect(1, { |x| x.normalize});
-w = Window("scatter", Rect(128, 64, 200, 200));
-w.drawFunc = {
-	Pen.use {
-		d[0].size.do{|i|
-			var x = (d[0][i]*200);
-			var y = (d[1][i]*200);
-			var r = Rect(x,y,5,5);
-			Pen.fillColor = Color.blue;
-			Pen.fillOval(r);
-		}
-	}
-};
-w.refresh;
-w.front;
-)

Looking at these plots alone won't really reveal the differences between these distance measures: the best way to find out which suits your material is to test them on your own data and listen to the musical differences they create! (A quick numeric illustration of the measures follows the code below.)
+code::
-//we can change the distance computation
-~mds.distanceMetric = FluidMDS.kl;
~src = Buffer.read(s,FluidFilesPath("Nicol-LoopE-M.wav"));
-//recompute the reduction and recover the points
(
-~reducedarray2 = Array.new(100);
-~mds.fitTransform(~standardized, ~reduced, action:{
-	~reduced.dump{|x| 100.do{|i|
-		~reducedarray2.add(x["data"][i.asString])
-	}}});
+~features = Buffer(s);
+FluidBufMFCC.processBlocking(s,~src,features:~features,startCoeff:1);
+~ds = FluidDataSet(s).fromBuffer(~features);
+
+fork({
+	var win = Window("Comparing Distance Measures",Rect(0,0,1200,800));
+	["Manhattan","Euclidean","Squared Euclidean","Minkowski Max","Minkowski Min","Symmetric Kullback Leibler"].do{
+		arg name, dist_measure;
+		var ds_transformed = FluidDataSet(s);
+		"computing distance measure: % %".format(dist_measure, name).postln;
+		FluidMDS(s,2,dist_measure).fitTransform(~ds,ds_transformed);
+		FluidNormalize(s).fitTransform(ds_transformed,ds_transformed);
+		ds_transformed.dump({
+			arg dict;
+			defer{
+				var x = (dist_measure * 400) % win.bounds.width;
+				var y = (dist_measure / 3).floor * 400;
+				FluidPlotter(win,Rect(x + 5,y + 5,390,390),dict);
+				UserView(win,Rect(x + 5,y + 5,390,390))
+				.drawFunc_{
+					Pen.stringAtPoint(name + "Distance",Point(10,10),color:Color.red);
+				};
+			};
+		});
+		s.sync;
+	};
+	win.front;
+},AppClock);
)
-//draw the new projection in red above the other
-//Visualise the 2D projection of our original 12D data
-(
-d = ~reducedarray.flop.deepCollect(1, { |x| x.normalize});
-e = ~reducedarray2.flop.deepCollect(1, { |x| x.normalize});
-w.drawFunc = {
-	Pen.use {
-		d[0].size.do{|i|
-			var x = (d[0][i]*200);
-			var y = (d[1][i]*200);
-			var r = Rect(x,y,5,5);
-			Pen.fillColor = Color.blue;
-			Pen.fillOval(r);
-		};
-		e[0].size.do{|i|
-			var x = (e[0][i]*200);
-			var y = (e[1][i]*200);
-			var r = Rect(x,y,5,5);
-			Pen.fillColor = Color.red;
-			Pen.fillOval(r);
-		}
-	}
-};
-w.refresh;
-w.front;
-)
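+
+// a quick numeric illustration (plain language-side sclang, no FluCoMa objects) of how
+// these distance measures differ on a single pair of points:
+(
+var a = [0.0,0.0], b = [3.0,4.0];
+var diff = (a - b).abs;
+"Manhattan: %".format(diff.sum).postln;                 // 7
+"Euclidean: %".format(diff.squared.sum.sqrt).postln;    // 5
+"Squared Euclidean: %".format(diff.squared.sum).postln; // 25
+"Minkowski Max: %".format(diff.maxItem).postln;         // 4
+"Minkowski Min: %".format(diff.minItem).postln;         // 3
+)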
::
diff --git a/example-code/sc/MLPClassifier.scd b/example-code/sc/MLPClassifier.scd
index 5f437f6..304e0e2 100644
--- a/example-code/sc/MLPClassifier.scd
+++ b/example-code/sc/MLPClassifier.scd
@@ -1,104 +1,264 @@
-
+strong::Real-Time Tweaking Parameters::
code::
+
+s.boot;
+
+// some audio files to classify
(
-~classifier=FluidMLPClassifier(s).hidden_([6]).activation_(FluidMLPClassifier.tanh).maxIter_(1000).learnRate_(0.1).momentum_(0.1).batchSize_(50).validation_(0);
-~sourcedata= FluidDataSet(s);
-~labels = FluidLabelSet(s);
-~testdata = FluidDataSet(s);
-~predictedlabels = FluidLabelSet(s);
+~tbone = Buffer.read(s,FluidFilesPath("Olencki-TenTromboneLongTones-M.wav"),27402,257199);
+~oboe = Buffer.read(s,FluidFilesPath("Harker-DS-TenOboeMultiphonics-M.wav"),27402,257199);
)
-//Make some clumped 2D points and place into a DataSet
+
+// listen to these short bits
+~tbone.play;
+~oboe.play;
+
+// create a DataSet of pitch and pitch confidence analyses (and normalize them)
(
-~centroids = [[0.5,0.5],[-0.5,0.5],[0.5,-0.5],[-0.5,-0.5]];
-~categories = [\red,\orange,\green,\blue];
-~trainingset = Dictionary();
-~labeldata = Dictionary();
-4.do{ |i|
-	64.do{ |j|
-		~trainingset.put("mlpclass"++i++\_++j, ~centroids[i].collect{|x| x.gauss(0.5/3)});
-		~labeldata.put("mlpclass"++i++\_++j,[~categories[i]]);
-	}
+~dataSet = FluidDataSet(s);
+~labelSet = FluidLabelSet(s);
+~pitch_features = Buffer(s);
+~point = Buffer(s);
+[~tbone,~oboe].do{
+	arg src, instr_id;
+	FluidBufPitch.processBlocking(s,src,features:~pitch_features,windowSize:2048);
+	252.do{ // I happen to know there are 252 frames in this buffer
+		arg idx;
+		var id = "slice-%".format((instr_id*252)+idx);
+		var label = ["trombone","oboe"][instr_id];
+		FluidBufFlatten.processBlocking(s,~pitch_features,idx,1,destination:~point);
+		~dataSet.addPoint(id,~point);
+		~labelSet.addLabel(id,label);
+	};
};
-~sourcedata.load(Dictionary.with(*[\cols->2,\data->~trainingset]),action:{~sourcedata.print});
-~labels.load(Dictionary.with(*[\cols->1,\data->~labeldata]),action:{~labels.print});
+FluidNormalize(s).fitTransform(~dataSet,~dataSet);
+~dataSet.print;
+~labelSet.print;
)
-//Fit the classifier to the example DataSet and labels, and then run prediction on the test data into our mapping label set
-~classifier.fit(~sourcedata,~labels,action:{|loss| ("Trained"+loss).postln});
(
+// take a look if you want: quite clear separation for the neural network to learn (blue will be trombone and orange will be oboe)
+~dataSet.dump({
+	arg datadict;
+	~labelSet.dump({
+		arg labeldict;
+		defer{
+			FluidPlotter(dict:datadict).categories_(labeldict);
+		};
+	});
+});
)
-//make some test data
(
-~testset = Dictionary();
-4.do{ |i|
-	64.do{ |j|
-		~testset.put("mlpclass_test"++i++\_++j, ~centroids[i].collect{|x| x.gauss(0.5/3)});
-	}
+// make a neural network
+~mlp = FluidMLPClassifier(s,[3],activation:FluidMLPClassifier.sigmoid,maxIter:20,learnRate:0.01,batchSize:1,validation:0.1);
+
+// make a flag that can later be set to false
+~continuous_training = true;
+
+// a recursive function for training
+~train = {
+	~mlp.fit(~dataSet,~labelSet,{
+		arg error;
+		"current error: % ".format(error.asStringff(5)).post;
+		{"*".post;} ! (error*100).asInteger;
+		"".postln;
+		if(~continuous_training){~train.()}
+	});
};
-~testdata.load(Dictionary.with(*[\cols->2,\data->~testset]),action:{~testdata.print});
+
+// start training
+~train.();
)
-//Run the test data through the network, into the predicted labelset
-~classifier.predict(~testdata,~predictedlabels,action:{"Test complete".postln});
+// you can make adjustments while it's recursively calling itself:
+~mlp.learnRate_(0.02); // won't reset the neural network
+~mlp.batchSize_(2); // won't reset the neural network
+~mlp.maxIter_(50); // won't reset the neural network
+~mlp.validation_(0.05); // won't reset the neural network
+~mlp.momentum_(0.95); // won't reset the neural network
+
+~mlp.hiddenLayers_([2]); // *will* reset the neural network
+~mlp.activation_(FluidMLPClassifier.tanh); // *will* reset the neural network
-//get labels from server
-~predictedlabels.dump(action:{|d| ~labelsdict = d["data"]; ~labelsdict.postln});
+// when the loss has decreased and then leveled out, stop the recursive training:
+~continuous_training = false;
-//Visualise: we're hoping to see colours neatly mapped to quandrants...
+// make some predictions
(
-c = Dictionary();
-c.add("red"->Color.red);
-c.add("blue"->Color.blue);
-c.add("green"->Color.green);
-c.add("orange"->Color.new255(255, 127, 0));
-e = 200 * ((~centroids + 1) * 0.5).flatten(1).unlace;
-w = Window("scatter", Rect(128, 64, 200, 200));
-w.drawFunc = {
-	Pen.use {
-		~testset.keysValuesDo{|k,v|
-			var x = v[0].linlin(-1,1,200,0).asInteger;
-			var y = v[1].linlin(-1,1,200,0).asInteger;
-			var r = Rect(x,y,5,5);
-			Pen.fillColor = c.at(~labelsdict[k][0]);
-			Pen.fillOval(r);
-		}
-	}
-};
-w.refresh;
-w.front;
-)
+~predictions = FluidLabelSet(s);
+~mlp.predict(~dataSet,~predictions);
+~predictions.dump({
+	arg predictions;
+	~labelSet.dump({
+		arg labels;
+		var wrong = 0;
+		var total = predictions["data"].size;
+		labels["data"].keysValuesDo{
+			arg k, v;
+			var label = v[0];
+			var prediction = predictions["data"][k][0];
+			"key: %\t\tlabel: %\t\tprediction: %".format(k,label.padRight(8),prediction.padRight(8)).post;
+			if(label != prediction){
+				wrong = wrong + 1;
+				"\t\t* wrong".post;
+			};
+			"".postln;
+		};
-// single point transform on arbitrary value
-~inbuf = Buffer.loadCollection(s,0.5.dup);
-~classifier.predictPoint(~inbuf,{|x|x.postln;});
+		"\n% wrong / % total".format(wrong,total).postln;
+		"% percent correct".format(((1-(wrong/total)) * 100).round(0.01)).postln;
+		"of course it should get almost all of these correct, this is the data it trained on!".postln;
+	});
+});
)
::
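+strong::Predicting a Single Point::
+
+A minimal sketch of code::predictPoint:: (assuming ~mlp has been trained as above): retrieve one (already normalized) point from the DataSet into a buffer, then ask the network for its label.
+code::
+(
+~dataSet.getPoint("slice-0",~point,{
+	~mlp.predictPoint(~point,{
+		arg label;
+		"predicted label: %".format(label).postln;
+	});
+});
+)
+::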
strong::Training-Testing Split::
code::
+
+// two audio buffers to use as separate classes
(
+~buffers = [
+	Buffer.readChannel(s,FluidFilesPath("Tremblay-AaS-AcBassGuit-Melo-M.wav"),channels:[0]),
+	Buffer.readChannel(s,FluidFilesPath("Tremblay-CEL-GlitchyMusicBoxMelo.wav"),channels:[0])
+];
)

+// strip any silence
+(
+fork{
+	~buffers = ~buffers.collect{
+		arg src;
+		var indices = Buffer(s);
+		var temp = Buffer(s);
+		FluidBufAmpGate.processBlocking(s,src,indices:indices,onThreshold:-30,offThreshold:-35,minSliceLength:4410);
+		indices.loadToFloatArray(action:{
+			arg fa;
+			var curr = 0;
+			fa.clump(2).do{
+				arg arr;
+				var start = arr[0];
+				var num = arr[1] - start;
+				FluidBufCompose.processBlocking(s,src,start,num,destination:temp,destStartFrame:curr);
+				curr = curr + num;
+			};
+			indices.free;
+			src.free;
+		});
+		temp;
+	};
+	s.sync;
+	"done stripping silence".postln;
+}
)

+// take a look to see that the silence is stripped
+(
+~win = Window("FluidWaveform Test",Rect(0,0,1000,500));
+~fws = ~buffers.collect{arg buf; FluidWaveform(buf, standalone: false)};
+~win.view.layout = VLayout(~fws[0], ~fws[1]);
+~win.front;
+)

+// analysis
+(
+fork{
+	var features = Buffer(s);
+	var flat = Buffer(s);
+	var counter = 0;
+	~trainingData = FluidDataSet(s);
+	~trainingLabels = FluidLabelSet(s);
+	~testingData = FluidDataSet(s);
+	~testingLabels = FluidLabelSet(s);
+
+	~buffers.do{
+		arg buf, buffer_i;
+		FluidBufMFCC.processBlocking(s,buf,features:features,startCoeff:1);
+		s.sync;
+		features.numFrames.do{
+			arg i;
+			var id = "analysis-%".format(counter);
+			FluidBufFlatten.processBlocking(s,features,i,1,destination:flat);
+			if(0.8.coin){ // randomly: 80% of the time add to training data, 20% add to testing data
+				~trainingData.addPoint(id,flat);
+				~trainingLabels.addLabel(id,buffer_i);
+			}{
+				~testingData.addPoint(id,flat);
+				~testingLabels.addLabel(id,buffer_i);
+			};
+			counter = counter + 1;
+		};
+	};
+
+	s.sync;
+
+	~trainingData.print;
+	~trainingLabels.print;
+	~testingData.print;
+	~testingLabels.print;
};
)

+// train!
(
+~mlp = FluidMLPClassifier(s,[7],activation:FluidMLPClassifier.sigmoid,maxIter:100,learnRate:0.01,batchSize:1,validation:0.1);
+
+~mlp.fit(~trainingData,~trainingLabels,{
+	arg error;
+	"current error: % ".format(error).postln;
+});
)

// test!
(
+~predictions = FluidLabelSet(s);
+~mlp.predict(~testingData,~predictions,{
+	~predictions.dump({
+		arg yhat;
+		~testingLabels.dump({
+			arg labels;
+			var wrong = 0;
+			labels["data"].keysValuesDo{
+				arg k, v;
+				var label = v[0];
+				var pred = yhat["data"][k][0];
+				"id: %\t\tlabel: %\t\tprediction: %".format(k.padRight(14),label,pred).post;
+				if(pred != label){
+					"\t\t* wrong".post;
+					wrong = wrong + 1;
+				};
+				"".postln;
+			};
+			"% / % were predicted incorrectly".format(wrong,labels["data"].size).postln;
+		});
+	});
});
)
::
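+strong::Saving and Loading a Trained Classifier::
+
+A trained network can be saved to disk as JSON and recalled in a later session. A minimal sketch (assuming ~mlp has been trained as above; the file path is hypothetical):
+code::
+~mlp.write(Platform.userAppSupportDir +/+ "my-classifier.json"); // hypothetical path
+
+// ...then, in a later session:
+~mlp = FluidMLPClassifier(s);
+~mlp.read(Platform.userAppSupportDir +/+ "my-classifier.json");
+::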
strong::Predict classes entirely on the server::
code::
+~mlp = FluidMLPClassifier(s);
+
+// load a model that has been pre-trained to classify between a tone and noise, simple, I know, but...
+~mlp.read(FluidFilesPath("../../Resources/bufToKrExample.json"));
+
+// can be used to demonstrate that...
(
{
+	var input_buf = LocalBuf(7);
+	var out_buf = LocalBuf(1);
+	var sig = Select.ar(ToggleFF.kr(Dust.kr(1)),[SinOsc.ar(440),PinkNoise.ar]);
+	var analysis = FluidSpectralShape.kr(sig);
+	FluidKrToBuf.kr(analysis,input_buf);
+
+	// the output prediction is written into a buffer
+	~mlp.kr(Impulse.kr(30),input_buf,out_buf);
+
+	// and FluidBufToKr can be used to read the prediction out into a control rate stream
+	FluidBufToKr.kr(out_buf).poll;
+
+	sig.dup * -30.dbamp
}.play;
)
::
diff --git a/example-code/sc/MLPRegressor.scd b/example-code/sc/MLPRegressor.scd
index fbac638..7bb4da1 100644
--- a/example-code/sc/MLPRegressor.scd
+++ b/example-code/sc/MLPRegressor.scd
@@ -1,93 +1,386 @@
-
+strong::Control a multi-parameter Synth from a 2D space::
code::
-//Make a simple mapping between a ramp and a sine cycle, test with an exponentional ramp
+/*
+
+1. Run block of code 1 to make some datasets and buffers for interfacing with the neural network
+2. Run block of code 2 to make the actual synth that we'll be controlling
+3. Run block of code 3 to make the GUI that will let us (a) add input-output example pairs, (b) use those
+example pairs to train the neural network, and (c) make predictions with the neural network
+4. Use the multislider to change the synthesis parameters to a sound you like
+5. Choose a position in the 2D slider where you want this sound to be placed.
+6. Now that you have your input (2D position) paired with your output (multislider positions), click
+"Add Points" to add these to their respective data sets.
+7. Repeat steps 4-6 about 5 times. Make sure to keep moving the 2D slider to new positions for each sound.
+8. Train the neural network by hitting "Train" a bunch of times. For this task you can probably aim to get
+the "loss" down around 0.001.
+9. Hit the "Not Predicting" toggle so that it turns into predicting mode and start moving around the 2D slider!
+
+*/
+
+// 1. datasets and buffers for interfacing with the neural network
(
-~source = FluidDataSet(s);
-~target = FluidDataSet(s);
-~test = FluidDataSet(s);
-~output = FluidDataSet(s);
-~tmpbuf = Buffer.alloc(s,1);
-~regressor=FluidMLPRegressor(s).hidden_([2]).activation_(FluidMLPRegressor.tanh).outputActivation_(FluidMLPRegressor.tanh).maxIter_(1000).learnRate_(0.1).momentum_(0.1).batchSize_(1).validation_(0);
+~ds_input = FluidDataSet(s);
+~ds_output = FluidDataSet(s);
+~x_buf = Buffer.alloc(s,2);
+~y_buf = Buffer.loadCollection(s,{rrand(0.0,1.0)} ! 10);
+~nn = FluidMLPRegressor(s,[7],FluidMLPRegressor.sigmoid,FluidMLPRegressor.sigmoid,learnRate:0.1,batchSize:1,validation:0);
)
+// 2. the synth that we'll be controlling (has 10 control parameters, expects them to be between 0 and 1)
(
+~synth = {
+	var val = FluidBufToKr.kr(~y_buf);
+	var osc1, osc2, feed1, feed2, base1=69, base2=69, base3 = 130;
+	#feed2,feed1 = LocalIn.ar(2);
+	osc1 = MoogFF.ar(SinOsc.ar((((feed1 * val[0]) + val[1]) * base1).midicps,mul: (val[2] * 50).dbamp).atan,(base3 - (val[3] * (FluidLoudness.kr(feed2, 1, 0, hopSize: 64)[0].clip(-120,0) + 120))).lag(128/44100).midicps, val[4] * 3.5);
+	osc2 = MoogFF.ar(SinOsc.ar((((feed2 * val[5]) + val[6]) * base2).midicps,mul: (val[7] * 50).dbamp).atan,(base3 - (val[8] * (FluidLoudness.kr(feed1, 1, 0, hopSize: 64)[0].clip(-120,0) + 120))).lag(128/44100).midicps, val[9] * 3.5);
+	Out.ar(0,LeakDC.ar([osc1,osc2],mul: 0.1));
+	LocalOut.ar([osc1,osc2]);
+}.play;
)

+// 3. a gui for creating input-output example pairs, then training, and then making predictions.
(
+~counter = 0;
+~predicting = false;
+~prediction_buf = Buffer.alloc(s,10);
+~win = Window("MLP Regressor",Rect(0,0,1000,400));
+
+~multisliderview = MultiSliderView(~win,Rect(0,0,400,400))
+.size_(10)
+.elasticMode_(true)
+.action_({
+	arg msv;
+	~y_buf.setn(0,msv.value);
+});
+
+Slider2D(~win,Rect(400,0,400,400))
+.action_({
+	arg s2d;
+	[s2d.x,s2d.y].postln;
+	~x_buf.setn(0,[s2d.x,s2d.y]); // put the x position directly into the buffer
+	if(~predicting,{
+		~nn.predictPoint(~x_buf,~y_buf,{
+
+			// the synth actually reads directly out of the ~y_buf, but here we get it back to the language so we can display
+			// it in the multislider
+			~y_buf.getn(0,10,{
+				arg prediction_values;
+				{~multisliderview.value_(prediction_values)}.defer;
+			});
+		});
+	});
+});
+
+Button(~win,Rect(800,0,200,20))
+.states_([["Add Points"]])
+.action_({
+	arg but;
+	var id = "example-%".format(~counter);
+	~ds_input.addPoint(id,~x_buf);
+	~ds_output.addPoint(id,~y_buf);
+	~counter = ~counter + 1;
+
+	~ds_input.print;
+	~ds_output.print;
+});
+
+Button(~win,Rect(800,20,200,20))
+.states_([["Train"]])
+.action_({
+	arg but;
+	~nn.fit(~ds_input,~ds_output,{
+		arg loss;
+		"loss: %".format(loss).postln;
+	});
+});
+
+Button(~win,Rect(800,40,200,20))
+.states_([["Not Predicting",Color.yellow,Color.black],["Is Predicting",Color.green,Color.black]])
+.action_({
+	arg but;
+	~predicting = but.value.asBoolean;
+});
+
+~win.front;
)
::
strong::Making Predictions on the Server (Predicting Frequency Modulation parameters from MFCC analyses)::

The idea: train the network to map 13 MFCC values to the three FM synthesis parameters that produced them, then use the trained network to drive the synth from live analysis, entirely on the server.
code::
-// run this to train the network for up to 1000(max epochs to map source to target. fit() returns loss. If this is -1, then training has failed.
Run until the printed error is satisfactory to you
-~regressor.fit(~source, ~target, {|x|x.postln;});
(
+~inputs = FluidDataSet(s);
+~outputs = FluidDataSet(s);
+~analysis_buf = Buffer.alloc(s,13);
+~params_buf = Buffer.loadCollection(s,[800,300,3]); // some initial parameters to hear it turn on
)
-//you can change parameters of the MLPregressor with setters
-~regressor.learnRate = 0.01;
-~regressor.momentum = 0;
-~regressor.validation= 0.2;
+// for playing the fm and getting the data
(
+~synth = {
+	var params = FluidBufToKr.kr(~params_buf,numFrames:3); // get the synth params out of the buffer
+	var sig = SinOsc.ar(params[0] + SinOsc.ar(params[1],0,params[1]*params[2]),0,-40.dbamp); // fm synthesis
+	var analysis = FluidMFCC.kr(sig,13,startCoeff:1); // get the mfcc analysis of the sound
+	FluidKrToBuf.kr(analysis,~analysis_buf,krNumChans:13); // write it into a buffer
+	sig.dup;
+}.play;
)

+// add 100 input-output example pairs
(
-~outputdata = Array(128);
-~regressor.predict(~test, ~output, action:{
-	~output.dump{|x| 128.do{|i|
-		~outputdata.add(x["data"][i.asString][0])
-	}};
+fork{
+	100.do{
+		arg i;
+		var cfreq = exprand(60,2000);
+		var mfreq = rrand(30.0,cfreq);
+		var index = rrand(0.0,20.0);
+		var params = [cfreq,mfreq,index];
+		~params_buf.setn(0,params);
+		s.sync;
+		~outputs.addPoint(i,~params_buf);
+		0.1.wait;
+		~inputs.addPoint(i,~analysis_buf);
+		0.1.wait;
+
+		"adding point: %\tparams: %".format(i.asStringff(3),params).postln;
+	};
+
+	~in_scaler = FluidNormalize(s).fitTransform(~inputs,~inputs);
+	~out_scaler = FluidNormalize(s).fitTransform(~outputs,~outputs);
+
+	s.sync;
+
+	~synth.free;
}
)
-//We should see a single cycle of a chirp. If not, fit a little more epochs
-~outputdata.plot;

// train
(
+~mlp = FluidMLPRegressor(s,[8],FluidMLPRegressor.sigmoid,FluidMLPRegressor.sigmoid,maxIter:100,learnRate:0.1,batchSize:5,validation:0.1);
+~continuously_train = true;
+~train = {
+	~mlp.fit(~inputs,~outputs,{
+		arg error;
+		"current error: % ".format(error.round(0.0001)).post;
+		{"*".post} ! (error * 100).asInteger;
+		"".postln;
+		if(~continuously_train){~train.()};
+	});
};
+~train.();
)
-// single point transform on arbitrary value
-~inbuf = Buffer.loadCollection(s,[0.5]);
-~outbuf = Buffer.new(s);
-~regressor.predictPoint(~inbuf,~outbuf,{|x|x.postln;x.getn(0,1,{|y|y.postln;};)});
::
+
+// tweak some params during the training maybe:
+~mlp.validation_(0.1);
+~mlp.learnRate_(0.01);
+~mlp.maxIter_(100);
+~mlp.batchSize_(1);
+~mlp.hiddenLayers_([10,5]);
+
+// once it levels out:
+~continuously_train = false;
+
+// see how it fares, use the mouse to hear the balance
+~musicbox = Buffer.readChannel(s,FluidFilesPath("Tremblay-CEL-GlitchyMusicBoxMelo.wav"),channels:[0]);
+
(
+~synth = {
+	arg test_buf;
+	var test_sig = PlayBuf.ar(1,test_buf,BufRateScale.kr(test_buf),loop:1);
+	var test_sig_loudness = FluidLoudness.kr(test_sig)[0];
+	var mfccs = FluidMFCC.kr(test_sig,13,startCoeff:1);
+	var mfcc_buf = LocalBuf(13);
+	var mfcc_buf_scaled = LocalBuf(13);
+	var prediction = LocalBuf(3);
+	var prediction_scaled = LocalBuf(3);
+	var trig = Impulse.kr(30);
+	var params, sig;
+	mfccs = FluidStats.kr(mfccs,20)[0];
+	FluidKrToBuf.kr(mfccs,mfcc_buf);
+	~in_scaler.kr(trig,mfcc_buf,mfcc_buf_scaled);
+	~mlp.kr(trig,mfcc_buf_scaled,prediction);
+	~out_scaler.kr(trig,prediction,prediction_scaled,invert:1);
+	params = FluidBufToKr.kr(prediction_scaled);
+	sig = SinOsc.ar(params[0] + SinOsc.ar(params[1],0,params[1]*params[2]),0,test_sig_loudness.dbamp);
+	[test_sig * (1-MouseX.kr),sig * MouseX.kr];
+}.play(args:[\test_buf,~musicbox]);
)

+~oboe = Buffer.readChannel(s,FluidFilesPath("Harker-DS-TenOboeMultiphonics-M.wav"),channels:[0]);
+~synth.set(\test_buf,~oboe);

+~tbone = Buffer.readChannel(s,FluidFilesPath("Olencki-TenTromboneLongTones-M.wav"),channels:[0]);
+~synth.set(\test_buf,~tbone);
-subsection:: Querying in a Synth
-This is the equivalent of calling code::predictPoint::, except wholly on the server
+~voice = Buffer.readChannel(s,FluidFilesPath("Tremblay-AaS-VoiceQC-B2K-M.wav"),channels:[0]);
+~synth.set(\test_buf,~voice);

+~drums = Buffer.readChannel(s,FluidFilesPath("Nicol-LoopE-M.wav"),channels:[0]);
+~synth.set(\test_buf,~drums);
::
strong::Using an Autoencoder to Create a Latent Space of Wavetables::

An autoencoder is a neural network trained to reproduce its input at its output while squeezing the data through a small internal layer; here that two-dimensional "bottleneck" becomes a navigable latent space of wavetables.
code::
+// thanks to Timo Hoogland for the inspiration!

+// boot the server
+s.boot;

+// the source material wavetables will come from here (also try out other soundfiles!)
+~buf = Buffer.read(s,FluidFilesPath("Olencki-TenTromboneLongTones-M.wav"));

+// remove the silent parts of the buffer
(
+var indices = Buffer(s);
+FluidBufAmpGate.processBlocking(s,~buf,indices:indices,onThreshold:-30,offThreshold:-40,minSliceLength:0.1*s.sampleRate,minSilenceLength:0.1*s.sampleRate,rampDown:0.01*s.sampleRate);
+indices.loadToFloatArray(action:{
+	arg fa;
+	var current_frame = 0;
+	var temp = Buffer(s);
+	fa = fa.clump(2);
+	fa.do{
+		arg arr, i;
+		var startFrame = arr[0];
+		var numFrames = arr[1] - startFrame;
+		FluidBufCompose.processBlocking(s,~buf,startFrame,numFrames,destination:temp,destStartFrame:current_frame);
+		current_frame = current_frame + numFrames;
+	};
+	s.sync;
+	indices.free;
+	~buf = temp;
+	"silence removed".postln;
+});
)

+// make a dataset comprised of points that are ~winSize samples long
(
+~winSize = 3000; // has to be somewhat long otherwise it can't represent lower frequencies
+fork{
+	var bufWin = Buffer(s);
+	var currentFrame = 0;
+	var counter = 0;
+
+	~ds = FluidDataSet(s);
+
+	while({
+		(currentFrame + ~winSize) < ~buf.numFrames;
+	},{
+		FluidBufCompose.processBlocking(s,~buf,currentFrame,~winSize,destination:bufWin);
+		~ds.addPoint(counter,bufWin);
+		"% / %".format(currentFrame.asString.padLeft(10),~buf.numFrames.asString.padLeft(10)).postln;
+		if((counter % 100) == 99){s.sync};
+		counter = counter + 1;
+		currentFrame = currentFrame + ~winSize;
+	});
+	bufWin.free;
+	"done".postln;
}
)

+// take a look at it
+~ds.print;

+// use PCA to reduce the number of dimensions from the length of the wavetable to something smaller (this will take a second to process!)
(
+~ds_pca = FluidDataSet(s);
+~pca = FluidPCA(s,60);
+~pca.fitTransform(~ds,~ds_pca,{
+	arg variance;
+	variance.postln;
+});
)

+// take a look at it
+~ds_pca.print;

+// train the autoencoder using the PCA-ed data
(
+~nn_shape = [40,20,2,20,40];
+~ae = FluidMLPRegressor(s,~nn_shape,FluidMLPRegressor.relu,FluidMLPRegressor.identity,learnRate:0.1,maxIter:10,validation:0.2);
+~ae.tapOut_(-1);
+~continuous = true;
+~train = {
+	~ae.fit(~ds_pca,~ds_pca,{
+		arg loss;
+		loss.postln;
+		if(~continuous,{~train.()});
+	});
};
+~train.();
)
+// tweak the learning rate
+~ae.learnRate_(0.01);
+~ae.learnRate_(0.001);
+// turn off continuous training
+~continuous = false;
+// plot it
(
+var ds_predict = FluidDataSet(s);
+var ds_predict_norm = FluidDataSet(s);
+var norm2D = FluidNormalize(s);
+var buf2D_norm = Buffer.alloc(s,2);
+var buf2D = Buffer.alloc(s,2);
+var buf_pca_point = Buffer.alloc(s,~pca.numDimensions);
+var buf_pca_point_norm = Buffer.alloc(s,~pca.numDimensions);
+var wave = Buffer.alloc(s,~winSize);
+
+// get the latent space
+~ae.tapIn_(0).tapOut_((~nn_shape.size+1)/2); // set the "output" of the neural network to be the latent space in the middle
+~ae.predict(~ds_pca,ds_predict,{"prediction done".postln;}); // 2D latent space is now in the ds_predict dataset
+
+// normalize it for plotting
+norm2D.fitTransform(ds_predict,ds_predict_norm);
+norm2D.invert_(1); // this will be used to invert points back in the code below
+
+~ae.tapIn_((~nn_shape.size+1)/2); // now set the "input" of the neural network to be the latent space in the middle
+~ae.tapOut_(-1); // set the output of the neural network back to the final layer
+ds_predict_norm.dump({
+	arg dict;
+	fork({
+		var win = Window("Autoencoder Wavetable",Rect(0,0,1024,700));
+		var ms = MultiSliderView(win,Rect(0,600,1024,100));
+		ms.elasticMode_(true);
+		ms.reference_(0.5);
+		ms.drawRects_(false);
+		ms.drawLines_(true);
+
+		// this is the synth that will play back the wavetable with an overlap of 4
+		{
+			var n = 4;
+			var rate = 1;
+			var phs = Phasor.ar(0,rate,0,BufFrames.kr(wave));
+			var phss = n.collect{
+				arg i;
+				var p = phs + ((BufFrames.kr(wave) / n) * i);
+				p % (BufFrames.kr(wave));
+			};
+			var sig = BufRd.ar(1,wave,phss,1,4);
+			var env = EnvGen.ar(Env([0,1,0],[SampleDur.ir * ~winSize * 0.5].dup,\sin),phss > 0.5,timeScale:rate.reciprocal);
+			sig = sig * env;
+			Mix(sig).dup;
+		}.play;
+
+		// the plotter accepts mouse clicks and drags
+		FluidPlotter(win,bounds:Rect((win.bounds.width-600) / 2,0,600,600),dict:dict,mouseMoveAction:{
+			arg view, x, y;
+			buf2D_norm.setn(0,[x,y]); // put current position into this buffer
+			norm2D.transformPoint(buf2D_norm,buf2D); // transform the point back to the unnormalized dimensions (this has inverse set to 1)
+			~ae.predictPoint(buf2D,buf_pca_point); // use that to make a prediction from the 2D latent space back to the PCA space
+			~pca.inverseTransformPoint(buf_pca_point,wave); // transform the value in PCA space back to the length of the wavetable
+			wave.loadToFloatArray(action:{ // the wavetable is already on the server and playing; here we load it back just for display
+				arg fa;
+				if(fa.size > 0,{
+					defer{ms.value_(fa.resamp1(win.bounds.width).linlin(-1,1,0,1))};
+				});
+			});
+		});
+
+		win.front;
+	},AppClock);
});
)
::
diff --git a/flucoma/doc/templates/schelp_controls.schelp b/flucoma/doc/templates/schelp_controls.schelp
index 0676095..a27bdfa 100644
--- a/flucoma/doc/templates/schelp_controls.schelp
+++ b/flucoma/doc/templates/schelp_controls.schelp
@@ -1,16 +1,17 @@
+
+
+
+
+
+
+
{# in SC the order of params is non-fixed ('attributes') followed by fixed ('arguments') #}
{% for name, data in attributes.items() %}
ARGUMENT:: {{ name }}
{{ data['description'] | rst | indent(first=True) }}
-{%- if 'enum' in data -%}
-	table::{% for enumname,desc in data['enum'].items() -%}
-	## {{ enumname }} || {{ desc|rst }}
-{%- endfor %}
-::
-{%- endif -%}
-
+{% include "schelp_enum.schelp" %}
{% endfor %}

{% if species != 'buffer-proc' %}{# fixed controls are hidden for FluidBuf* #}
diff --git a/flucoma/doc/templates/schelp_data.schelp b/flucoma/doc/templates/schelp_data.schelp
index 1717e95..2e49bb7 100644
--- a/flucoma/doc/templates/schelp_data.schelp
+++ b/flucoma/doc/templates/schelp_data.schelp
@@ -12,6 +12,8 @@
ARGUMENT:: {{ name }}
{{ data['description'] | rst | indent(first=True) }}
+{% include "schelp_enum.schelp" %}
+
{% endfor %}
{% for name, data in arguments.items() %}
{% if name != 'name' %} {# we hide the 'name' arg for these objects in SC #}
diff --git a/flucoma/doc/templates/schelp_enum.schelp b/flucoma/doc/templates/schelp_enum.schelp
new file mode 100644
index 0000000..153e432
--- /dev/null
+++ b/flucoma/doc/templates/schelp_enum.schelp
@@ -0,0 +1,6 @@
+{%- if 'enum' in data -%}
+	table::{% for enumname,desc in data['enum'].items() -%}
+	## {{ enumname }} || {{ desc|rst }}
+{%- endfor %}
+::
+{%- endif -%}