{"id":297,"date":"2019-03-31T17:11:55","date_gmt":"2019-03-31T12:41:55","guid":{"rendered":"https:\/\/mirblog.me\/?p=297"},"modified":"2019-04-02T19:16:19","modified_gmt":"2019-04-02T14:46:19","slug":"mlpack-a-c-machine-learning-library","status":"publish","type":"post","link":"https:\/\/mirblog.net\/index.php\/2019\/03\/31\/mlpack-a-c-machine-learning-library\/","title":{"rendered":"mlpack: A C++ machine learning library"},"content":{"rendered":"<p style=\"text-align: justify;\">Nowadays, most people use <a href=\"https:\/\/scikit-learn.org\/stable\/\" target=\"_blank\" rel=\"noopener\">scikit-learn<\/a> for machine learning projects, and for good reason: scikit-learn is a top-quality ML package for Python that lets\u00a0you use a machine learning algorithm in just a few lines of Python code.<\/p>\n<p style=\"text-align: justify;\">As a machine learning researcher, I personally like to try other machine learning libraries as well. It&#8217;s good to have knowledge\u00a0of other ML libraries in your arsenal. Since I use C++ for my projects, I decided to try a C++ machine learning library.<\/p>\n<p><!--more--><\/p>\n<p style=\"text-align: justify;\">Last year, I did a bit of research on the internet and found <a href=\"https:\/\/www.mlpack.org\">mlpack<\/a>, a fast and scalable machine learning library for C++ (according to the definition on its website). mlpack has the following features, which I think make it worth trying:<\/p>\n<ul>\n<li>It is quite fast. (I will show an example next.)<\/li>\n<li>Its documentation is well written and has usage examples.<\/li>\n<li>It has bindings for Python.<\/li>\n<li>It comes with ready-to-use command-line programs, so there is no need to write a single line of C++ to run some algorithms.<\/li>\n<\/ul>\n<p style=\"text-align: justify;\">In this post, I show usage examples that may help you get started with the mlpack library. I have chosen the random forest classifier for these examples. 
First, I briefly explain how to install mlpack on your system. Then usage examples are given for both the CLI programs and the C++ API. Finally, I compare mlpack and scikit-learn using the random forest classifier.<\/p>\n<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_80 counter-hierarchy ez-toc-counter ez-toc-grey ez-toc-container-direction\">\n<p class=\"ez-toc-title\" style=\"cursor:inherit\">Table of Contents<\/p>\n<nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/mirblog.net\/index.php\/2019\/03\/31\/mlpack-a-c-machine-learning-library\/#Installation_of_mlpack\" >Installation of mlpack<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/mirblog.net\/index.php\/2019\/03\/31\/mlpack-a-c-machine-learning-library\/#Usage_example\" >Usage example<\/a><ul class='ez-toc-list-level-3' ><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/mirblog.net\/index.php\/2019\/03\/31\/mlpack-a-c-machine-learning-library\/#Dataset\" >Dataset<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/mirblog.net\/index.php\/2019\/03\/31\/mlpack-a-c-machine-learning-library\/#CLI_example\" >CLI example<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-5\" href=\"https:\/\/mirblog.net\/index.php\/2019\/03\/31\/mlpack-a-c-machine-learning-library\/#C_API_example\" >C++ API example<\/a><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-6\" href=\"https:\/\/mirblog.net\/index.php\/2019\/03\/31\/mlpack-a-c-machine-learning-library\/#Benchmark\" >Benchmark<\/a><ul class='ez-toc-list-level-3' ><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-7\" href=\"https:\/\/mirblog.net\/index.php\/2019\/03\/31\/mlpack-a-c-machine-learning-library\/#Test_Environment\" >Test Environment<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-8\" href=\"https:\/\/mirblog.net\/index.php\/2019\/03\/31\/mlpack-a-c-machine-learning-library\/#Results\" >Results<\/a><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-9\" href=\"https:\/\/mirblog.net\/index.php\/2019\/03\/31\/mlpack-a-c-machine-learning-library\/#Wrapping_up\" >Wrapping up<\/a><\/li><\/ul><\/nav><\/div>\n<h2><span class=\"ez-toc-section\" id=\"Installation_of_mlpack\"><\/span>Installation of mlpack<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p style=\"text-align: justify;\">First of all, I suggest you use a Linux distribution for machine 
learning projects. This is not just my opinion; many other developers and researchers also prefer Linux. Besides, the installation of mlpack on Linux systems is fairly straightforward.<\/p>\n<p style=\"text-align: justify;\">Probably the easiest way to install mlpack is to use the package manager of your Linux distro (I personally use Ubuntu most often). However, I don&#8217;t recommend this route, because in my case, Ubuntu installs an outdated version of mlpack (2.2.5 at the time of writing).<\/p>\n<p style=\"text-align: justify;\">Here I explain how to install mlpack from source, along with its dependencies. First, you have to install the <a href=\"http:\/\/arma.sourceforge.net\/\">Armadillo<\/a> and <a href=\"https:\/\/www.boost.org\/\">Boost<\/a> libraries. On Ubuntu and Debian, mlpack&#8217;s dependencies can be installed through apt:<\/p>\n<pre class=\"lang:sh decode:true \">sudo apt-get install libboost-all-dev libarmadillo-dev<\/pre>\n<p>After the dependencies are successfully installed, run the following commands in a terminal to install mlpack from source:<\/p>\n<pre class=\"lang:default decode:true\">wget http:\/\/www.mlpack.org\/files\/mlpack-3.0.4.tar.gz\r\ntar -xvzpf mlpack-3.0.4.tar.gz\r\nmkdir mlpack-3.0.4\/build &amp;&amp; cd mlpack-3.0.4\/build\r\ncmake ..\/\r\nmake -j4\r\nsudo make install<\/pre>\n<p>For more information about the CMake configuration and other options, check out the installation tips in the\u00a0<a href=\"https:\/\/www.mlpack.org\/doc\/mlpack-3.0.4\/doxygen\/build.html\" target=\"_blank\" rel=\"noopener\">documentation<\/a> of mlpack.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Usage_example\"><\/span>Usage example<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p style=\"text-align: justify;\">mlpack offers command-line programs for some ML algorithms as well as a C++ API. In this section, I give an example of both; the 
random forest algorithm was chosen for this purpose.<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Dataset\"><\/span>Dataset<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p style=\"text-align: justify;\">Following the mlpack docs, the <a href=\"https:\/\/archive.ics.uci.edu\/ml\/datasets\/covertype\">covertype<\/a> dataset will be used for the usage examples. The dataset has 100K samples and 7 classes. In short, this dataset is about predicting forest cover types from cartographic variables.<\/p>\n<p>To download the dataset and unpack it, run the following commands:<\/p>\n<pre class=\"lang:sh decode:true\">mkdir dataset &amp;&amp; cd dataset\r\nwget https:\/\/github.com\/mir-am\/mirblog\/raw\/master\/posts\/mlpack-post-en\/dataset\/covertype-small.data.csv.gz\r\nwget https:\/\/github.com\/mir-am\/mirblog\/raw\/master\/posts\/mlpack-post-en\/dataset\/covertype-small.labels.csv.gz\r\ngunzip -k covertype-small.data.csv.gz covertype-small.labels.csv.gz<\/pre>\n<p>Make sure that you have downloaded the dataset; otherwise, you cannot follow the examples below.<\/p>\n<h3><span class=\"ez-toc-section\" id=\"CLI_example\"><\/span>CLI example<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p style=\"text-align: justify;\">One of the interesting features of mlpack is that it has command-line programs that help you split the dataset and use a machine learning algorithm without writing any C++ code.<\/p>\n<p style=\"text-align: justify;\">In this example, we first split the dataset into train and test sets, which contain 70% and 30% of the samples, respectively. 
To do so, run the following commands:<\/p>\n<pre class=\"lang:sh decode:true \">mkdir train test\r\nmlpack_preprocess_split -i covertype-small.data.csv \\\r\n-I covertype-small.labels.csv \\\r\n-t train\/covertype-small.train.csv \\\r\n-l train\/covertype-small.train.labels.csv \\\r\n-T test\/covertype-small.test.csv \\\r\n-L test\/covertype-small.test.labels.csv \\\r\n-r 0.3 -v<\/pre>\n<p>Next, we train a random forest classifier with the following command:<\/p>\n<pre class=\"lang:sh decode:true \">mlpack_random_forest \\\r\n-t train\/covertype-small.train.csv \\\r\n-l train\/covertype-small.train.labels.csv \\\r\n-N 10 \\\r\n-n 3 \\\r\n-a -M rf-model.bin -v<\/pre>\n<p style=\"text-align: justify;\">I ran the &#8220;mlpack_random_forest&#8221; command on my system, and the training accuracy was <strong>95.87<\/strong> percent. Note that the random forest model is saved as well, so we can use it to predict the test samples. Now, run the following command to predict the labels of the test samples:<\/p>\n<pre class=\"lang:sh decode:true \">mlpack_random_forest \\\r\n-m rf-model.bin \\\r\n-T test\/covertype-small.test.csv \\\r\n-L test\/covertype-small.test.labels.csv \\\r\n-p predictions.csv -v<\/pre>\n<p style=\"text-align: justify;\">After running the above command, I achieved an accuracy of <strong>84.4<\/strong> percent on the test samples, which is lower than the training accuracy, as expected. If you have followed this example, you have probably noticed that mlpack is quite fast, as advertised. On my Ubuntu system, I trained an RF model on 70K samples in about 10 seconds, which is pretty good. Moreover, I created a bash script <a href=\"https:\/\/github.com\/mir-am\/mirblog\/tree\/master\/posts\/mlpack-post-en\" target=\"_blank\" rel=\"noopener\">here<\/a> so that you can see the whole CLI example in one place.<\/p>\n<p style=\"text-align: justify;\">I used the random forest classifier as an example. 
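<\/p>\n<p style=\"text-align: justify;\">As a quick sanity check, the accuracy reported by the CLI can be reproduced from the saved predictions with a short shell pipeline. This is only a sketch: it assumes the file names used above and that both files contain one label per line.<\/p>\n<pre class=\"lang:sh decode:true \"># Pair each predicted label with the true label, then count matches.\r\npaste -d, predictions.csv test\/covertype-small.test.labels.csv \\\r\n| awk -F, '{ n++; if ($1 == $2) c++ } END { print 100 * c \/ n \"%\" }'<\/pre>\n<p style=\"text-align: justify;\">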
However, mlpack also has command-line programs for various other ML algorithms. For more information, check out mlpack&#8217;s\u00a0<a href=\"https:\/\/www.mlpack.org\/doc\/mlpack-3.0.4\/cli_documentation.html#random_forest_detailed-documentation\" target=\"_blank\" rel=\"noopener\">documentation<\/a>. Next, I explain how to use the mlpack C++ API to train a random forest model on the covertype dataset.<\/p>\n<h3><span class=\"ez-toc-section\" id=\"C_API_example\"><\/span>C++ API example<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p style=\"text-align: justify;\">In this section, I assume that you have basic knowledge of the C++ language so that you can follow the example below. First, create a cpp file named &#8220;rf-mlpack.cpp&#8221; with the following command:<\/p>\n<pre class=\"lang:sh decode:true \">touch rf-mlpack.cpp<\/pre>\n<p>As you might have guessed, the first step is to add the mlpack header files at the beginning of the C++ code as follows:<\/p>\n<pre class=\"lang:c++ decode:true \">#include &lt;iostream&gt;\r\n#include &lt;mlpack\/core.hpp&gt;\r\n#include &lt;mlpack\/core\/util\/cli.hpp&gt;\r\n#include &lt;mlpack\/core\/data\/split_data.hpp&gt;\r\n#include &lt;mlpack\/methods\/random_forest\/random_forest.hpp&gt;\r\n\r\n#define BINDING_TYPE BINDING_TYPE_CLI\r\n#include &lt;mlpack\/core\/util\/mlpack_main.hpp&gt;<\/pre>\n<p style=\"text-align: justify;\">The above header files are needed to split the dataset and also to train a random forest model. Next, we define the main function, which is a little different from that of a typical C++ program: mlpack has its own main function, which accepts command-line arguments. However, you are not forced to use it.<\/p>\n<pre class=\"lang:c++ decode:true \">using namespace std;\r\nusing namespace mlpack;\r\nusing namespace mlpack::util;\r\nusing namespace mlpack::tree;\r\nvoid mlpackMain()\r\n{\r\n\/\/ Write following C++ code example here.\r\n}<\/pre>\n<p>Inside the main function, the first step is to load the dataset into an Armadillo matrix. 
Make sure that you use the correct path to the dataset.<\/p>\n<pre class=\"lang:c++ decode:true \">arma::mat samples;\r\narma::Row&lt;size_t&gt; labels;\r\n\r\ndata::Load(\".\/dataset\/covertype-small.data.csv\",\r\nsamples, true);\r\ndata::Load(\".\/dataset\/covertype-small.labels.csv\",\r\nlabels);<\/pre>\n<p>Similar to the CLI example, we split the dataset into train and test sets in C++:<\/p>\n<pre class=\"lang:c++ decode:true \">arma::mat trainData;\r\narma::mat testData;\r\narma::Row&lt;size_t&gt; trainLabel;\r\narma::Row&lt;size_t&gt; testLabel;\r\n\r\ndata::Split(samples, labels, trainData, testData,\r\ntrainLabel, testLabel, 0.3);<\/pre>\n<p style=\"text-align: justify;\">Now, it&#8217;s time to create an instance of the random forest classifier using the mlpack API.<\/p>\n<pre class=\"lang:c++ decode:true \">\/\/ Labels are assumed to be 0-indexed, so the number of\r\n\/\/ classes is the largest label plus one.\r\nconst size_t numClasses = arma::max(labels) + 1;\r\nconst size_t numTree = 10;\r\nconst size_t minLeafSize = 3;\r\n\r\nRandomForest&lt;&gt;* rfModel = new RandomForest&lt;&gt;();<\/pre>\n<p>After initializing an instance of the RF classifier, it can be trained on the dataset with the specified hyper-parameters as follows:<\/p>\n<pre class=\"lang:c++ decode:true \">rfModel-&gt;Train(trainData, trainLabel, numClasses,\r\nnumTree, minLeafSize);<\/pre>\n<p>Next, we predict the labels of the test samples with the following code:<\/p>\n<pre class=\"lang:c++ decode:true \">arma::Row&lt;size_t&gt; pred;\r\nrfModel-&gt;Classify(testData, pred);<\/pre>\n<p>The last step is to compute the accuracy of the trained RF classifier.<\/p>\n<pre class=\"lang:c++ decode:true \">const size_t correct = arma::accu(pred == testLabel);\r\ncout &lt;&lt; \"Accuracy on test samples: \" &lt;&lt;\r\n double(correct) \/ double(pred.n_elem) * 100 &lt;&lt; endl;<\/pre>\n<p>Also, don&#8217;t forget to free up the memory when you&#8217;re done:<\/p>\n<pre class=\"lang:c++ decode:true \">delete rfModel;<\/pre>\n<p>After you have written the entire example in the &#8220;rf-mlpack.cpp&#8221; file, you can compile the 
file using the following command:<\/p>\n<pre class=\"lang:sh decode:true \">g++ .\/rf-mlpack.cpp -o rf-mlpack -O2 -std=c++11 -fopenmp \\\r\n-larmadillo -lmlpack -lopenblas -lboost_serialization \\\r\n-lboost_program_options<\/pre>\n<p>Notes on compilation:<\/p>\n<ul>\n<li style=\"text-align: justify;\">The -fopenmp flag is needed because mlpack uses the <a href=\"https:\/\/www.openmp.org\/\" target=\"_blank\" rel=\"noopener\">OpenMP<\/a> library to speed up algorithms on multi-core CPUs.<\/li>\n<li style=\"text-align: justify;\">The C++11 flag &#8220;-std=c++11&#8221; is required, as stated in the documentation of mlpack.<\/li>\n<li style=\"text-align: justify;\">You also need to link the program against the Armadillo, mlpack, OpenBLAS, and Boost libraries. Note that I installed Armadillo with the <a href=\"https:\/\/www.openblas.net\/\" target=\"_blank\" rel=\"noopener\">OpenBLAS<\/a> library; you may remove &#8220;-lopenblas&#8221; if you haven&#8217;t installed it.<\/li>\n<\/ul>\n<p>Finally, run the generated executable to see the results:<\/p>\n<pre class=\"lang:sh decode:true \">.\/rf-mlpack -v<\/pre>\n<p style=\"text-align: justify;\">On my Ubuntu system, I achieved an accuracy of <strong>82.24<\/strong> percent on the test samples. The execution time was about 10 seconds.<\/p>\n<p style=\"text-align: justify;\">As you&#8217;ve seen in this example, mlpack&#8217;s C++ interface is fairly clean and simple: we trained and tested a classifier in a few lines of code.\u00a0In the next section, I compare scikit-learn&#8217;s random forest with mlpack&#8217;s in terms of speed.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Benchmark\"><\/span>Benchmark<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p style=\"text-align: justify;\">I use scikit-learn quite often for my projects, as it is probably the most popular ML library out there. Thus, I decided to compare scikit-learn&#8217;s random forest classifier with mlpack&#8217;s in terms of computational time. 
However, bear in mind that this is NOT an extensive comparison between the two libraries, and I am NOT saying one library is better than the other.\u00a0I just wanted to run an experiment out of curiosity.<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Test_Environment\"><\/span>Test Environment<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p style=\"text-align: justify;\">Before presenting the results of the benchmark, I should briefly describe my system specs, which are shown in the table below:<\/p>\n\n<table id=\"tablepress-2\" class=\"tablepress tablepress-id-2\">\n<tbody class=\"row-striping row-hover\">\n<tr class=\"row-1\">\n\t<td class=\"column-1\">CPU<\/td><td class=\"column-2\">AMD Ryzen 7 1800X @ 3.6 GHz<\/td>\n<\/tr>\n<tr class=\"row-2\">\n\t<td class=\"column-1\">RAM<\/td><td class=\"column-2\">16 GB @ 2.4 GHz<\/td>\n<\/tr>\n<tr class=\"row-3\">\n\t<td class=\"column-1\">Storage<\/td><td class=\"column-2\">Samsung EVO 750 SSD 250 GB<\/td>\n<\/tr>\n<tr class=\"row-4\">\n\t<td class=\"column-1\">OS<\/td><td class=\"column-2\">Ubuntu 18.04.1 LTS<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>It should be noted that the above usage examples were run on this system.<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Results\"><\/span>Results<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p style=\"text-align: justify;\">I ran the random forest classifier 5 times with each library. The mean of the 5 trials is shown in the figure below:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-medium\" src=\"https:\/\/raw.githubusercontent.com\/mir-am\/mirblog\/master\/posts\/mlpack-post-en\/img\/scikit-learn-vs-mlpack-random-forest.png\" alt=\"scikit-learn vs. mlpack - training time of random forest\" width=\"1095\" height=\"622\" \/><\/p>\n<p style=\"text-align: justify;\">scikit-learn&#8217;s random forest classifier is <strong>10 times faster<\/strong> than mlpack&#8217;s, which is quite odd! 
I had actually expected mlpack to be faster, because it is implemented in C++ and uses Armadillo and OpenMP; as a result, mlpack uses several CPU cores when training an RF classifier, whereas scikit-learn&#8217;s RF classifier ran on a single core! Moreover, the hyper-parameters in this experiment were the same for both scikit-learn and mlpack. I should do a bit of research soon to find out why the benchmark results are so drastically different; I guess this is probably due to differences in the algorithm or its implementation.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Wrapping_up\"><\/span>Wrapping up<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p style=\"text-align: justify;\">So far, we have looked at the mlpack library, and usage examples for both the CLI programs and the C++ API were given. As you&#8217;ve seen, mlpack&#8217;s useful CLI programs help you run ML algorithms quickly without writing code. Moreover, mlpack provides a fairly easy-to-use C++ API that lets you train and test a classifier in a few lines of C++ code. (If you are a C++ programmer, I suggest you explore mlpack beyond what I covered in this post.) I also made a limited comparison between scikit-learn and mlpack in terms of speed using the random forest classifier. However, the benchmark results need further investigation, as they were unexpected and odd!<\/p>\n<p style=\"text-align: justify;\">In the end, I should mention that both usage examples, along with the benchmark code, can be downloaded from GitHub <a href=\"https:\/\/github.com\/mir-am\/mirblog\/tree\/master\/posts\/mlpack-post-en\" target=\"_blank\" rel=\"noopener\">here<\/a>.\u00a0If you have questions or problems, let me know by leaving a comment below. I&#8217;d also like to know your opinion on the benchmark above.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Nowadays, most people use scikit-learn for machine learning projects. 
Because scikit-learn is a top quality ML package for Python and lets\u00a0you use a machine learning algorithm in several lines of Python code, which is great! As a machine learning researcher, I personally like to try and use other machine learning libraries. It&#8217;s good to have &hellip; <a href=\"https:\/\/mirblog.net\/index.php\/2019\/03\/31\/mlpack-a-c-machine-learning-library\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;mlpack: A C++ machine learning library&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[4],"tags":[46,44,25,47,50,41,42,43,49,40,23,24,45,13,48],"class_list":["post-297","post","type-post","status-publish","format-standard","hentry","category-machine-learning","tag-api","tag-benchmark","tag-c","tag-cli","tag-covertype","tag-cpp","tag-examples","tag-installation","tag-linux","tag-mlpack","tag-program","tag-python","tag-random-forest","tag-scikit-learn","tag-ubuntu"],"_links":{"self":[{"href":"https:\/\/mirblog.net\/index.php\/wp-json\/wp\/v2\/posts\/297","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/mirblog.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/mirblog.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/mirblog.net\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/mirblog.net\/index.php\/wp-json\/wp\/v2\/comments?post=297"}],"version-history":[{"count":57,"href":"https:\/\/mirblog.net\/index.php\/wp-json\/wp\/v2\/posts\/297\/revisions"}],"predecessor-version":[{"id":451,"href":"https:\/\/mirblog.net\/index.php\/wp-json\/wp\/v2\/posts\/297\/revisions\/451"}],"wp:attachment":[{"href":"https:\/\/mirblog.net\/index.php\/wp-json\/wp\/v2\/media?parent=297"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http
s:\/\/mirblog.net\/index.php\/wp-json\/wp\/v2\/categories?post=297"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/mirblog.net\/index.php\/wp-json\/wp\/v2\/tags?post=297"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}