--- Log opened Wed May 25 00:00:10 2016
-!- besser82 [~besser82@fedora/besser82] has quit [Ping timeout: 250 seconds]00:09
-!- sanuj [~sanuj@] has joined #shogun08:05
-!- sanuj [~sanuj@] has quit [Ping timeout: 244 seconds]09:00
-!- sanuj [~sanuj@] has joined #shogun09:24
-!- Saurabh7 [Saurabh7@gateway/shell/panicbnc/x-qkwwseokbysimqpf] has quit [Ping timeout: 260 seconds]09:59
@wikingwhere are all the gsoc slaves? :D10:16
sanujwiking, hello10:17
@wikinghow's the tagging going?10:18
sanuji fixed a cookbook bug yesterday10:19
sanujsending pr for that10:19
@wikinghave you encountered any problems with the PR bot?10:19
sanuji need to make shogun support new tags and parameter10:19
sanujmerge it in develop10:19
@wikingi would do that first in a feature branch10:20
sanujyes feature branch of course10:20
sanujthen merge it10:20
@wikingdo you have a feature branch for that?10:20
sanujthat's first priority as OXPHOS needs to do serialization for that10:20
@wikingdont see one10:21
@wikingok i'll start a feature branch10:21
sanujyes :)10:21
@wikingso you can pr to that one10:21
@wikingthere it is10:23
sanujlisitsyn, are you in office?10:25
lisitsynnot yet10:25
sanujjust asking10:25
lisitsynI see :)10:25
sanujwhen does it snow in moscow10:26
lisitsynsanuj: deffs not now :D10:26
lisitsynsanuj: well a few times in october usually10:26
sanujit's 40 degree C here10:26
sanujhot hot hot10:26
lisitsynthen it is all covered with snow in mid of november10:26
lisitsynand finally in early april it is gone10:27
@wikingSAMARA FOR LIFE :)10:27
lisitsynit is unusually cold now - like 16C10:27
@wikinglisitsyn: ^ :>10:27
lisitsynwiking: well, moscow :D10:27
@wikingbut you are not from mosco10:27
lisitsynnor from samar10:27
sanujthere are many good sport coders from russia10:27
@wikinglisitsyn: loool?10:27
@wikingwhere then?10:27
sanujlisitsyn, are you also one of them? ;)10:28
lisitsynsanuj: nah10:28
@wikingoblast smara!10:28
lisitsynsanuj: don't like it at all10:28
@wikingthere you go10:28
lisitsynwiking: hah yes10:28
sanujlisitsyn, we need to do it here for job interviews :P10:28
lisitsynsanuj: sure10:28
lisitsynsanuj: actually not for me I think10:29
lisitsynI mean10:29
sanujeven i don't do it on my own10:29
lisitsynif I was to change my job it would be quite stupid to ask me to rotate a tree10:29
lisitsynas I am leading a team of 6 now10:29
@wikinglisitsyn: hahahah10:30
@wikinglisitsyn: they still do ask me that :)10:30
lisitsynwiking: yeah totally stupid10:30
@wikinglast time i told the guy10:30
@wikinglook man10:30
@wikingi worked with this shit for the last 4 years writing my phd thesis10:31
@wikingi think i know what i was doing10:31
@wikingthey took my answer personally :P10:31
lisitsynwiking: well I don't mind if they ask puzzles10:32
lisitsynwiking: but not like in that ACM10:33
@wikingah no it wasn't a puzzle10:33
@wikingit was like10:33
@wikingwhat is an avl-tree10:33
@wikingand how it works10:33
lisitsynhaha ok10:34
@wikingor same for kd-tree10:34
@wikingand it was clear for me10:34
lisitsynwell it is not that bad10:34
@wikingthat the guy who asked me the question10:34
lisitsynit would be terrible if they asked you this type of problem10:34
@wikinghad no clue what is a kd-tree10:34
lisitsynnah I don't know10:34
@wikingbecause he used only psql's function to find the nearest points10:34
@wikingso i was a bit asshole when i answered obviously10:34
@wikingbut then i realised that i really dont even wanna work with them :)10:35
sanujwiking, you have 2 phds!?10:36
sanuji saw your linkedin :P10:37
lisitsynoh I have a story about linkedin10:37
lisitsynI've deleted mine10:37
@wikingsanuj: ahahhah no10:37
sanujthe less stories the better10:37
@wikingsanuj: i've just started my phd somewhere10:37
@wikingand finished somewhere else10:38
@wikinglisitsyn: man i wanted to do that10:38
@wikinghow did you do it?!10:38
sanujwiking,  so you will be having 2 phds after a few years10:38
lisitsynwiking: well clicked somewhere in help10:38
@wikingsanuj: hahah i'll not finish that first one :(10:39
@wikingand one is more than enough for me trust me10:39
sanujfor anyone i guess10:39
sanujwiking, you work with medical images?10:40
@wikingsanuj: used to10:41
@wikingas my research yes10:41
sanujwiking, lisitsyn i think both of you have blogs10:42
lisitsynyeah but I fail to continue that10:43
lisitsynI've found some will to post a few things10:43
sanujlisitsyn, why do you blog10:43
sanuji need inspiration to do that10:43
@wikinglisitsyn: i'll start pinging you on my blog10:43
@wikingand maybe you'll start answering10:43
@wikingi think the last time i wrote a blog was ages ago10:43
@wikingarianepaola: hey10:44
@wikingso we need to setup benchmarking bots10:44
lisitsynsanuj: well it helps to write things10:47
lisitsynlike a defrag for your brain10:48
shogun-buildbotbuild #2873 of bsd1 - libshogun is complete: Failure [failed configure]  Build details are at  blamelist: Viktor Gal <>10:55
lisitsynwiking: that's gonna help the humanity10:56
lisitsyntwo useful blogs10:56
-!- besser82 [~besser82@fedora/besser82] has joined #shogun11:11
-!- mode/#shogun [+o besser82] by ChanServ11:11
-!- Saurabh7 [~Saurabh7@] has joined #shogun11:13
sanujtime for food11:15
-!- sanuj [~sanuj@] has quit [Ping timeout: 276 seconds]11:22
shogun-buildbotbuild #11 of xenial - libshogun is complete: Failure [failed test]  Build details are at  blamelist: Viktor Gal <>11:23
-!- Saurabh7- [Saurabh7@gateway/shell/panicbnc/x-tkhunxevfgkzzhnq] has joined #shogun11:25
-!- leagoetz [] has joined #shogun11:40
-!- travis-ci [] has joined #shogun11:44
travis-ciit's Viktor Gal's turn to pay the next round of drinks for the massacre he caused in shogun-toolbox/shogun:
-!- travis-ci [] has left #shogun []11:44
-!- leagoetz [] has quit []11:57
-!- lambday [8028b10a@gateway/web/freenode/ip.] has joined #shogun12:04
-!- mode/#shogun [+o lambday] by ChanServ12:04
-!- sonne|osx [] has joined #shogun12:11
-!- sanuj [~sanuj@] has joined #shogun12:24
-!- sonne|osx [] has quit [Quit: sonne|osx]12:24
-!- besser82 [~besser82@fedora/besser82] has quit [Ping timeout: 264 seconds]12:30
-!- sonne|osx [] has joined #shogun12:34
-!- Saurabh7 [~Saurabh7@] has quit [Quit: Leaving]12:44
-!- HeikoS [] has joined #shogun13:39
-!- mode/#shogun [+o HeikoS] by ChanServ13:39
sanujHeikoS, there?14:11
sanujlisitsyn, there?14:11
lisitsynsanuj: yes14:12
sanujInt is working now in cookbook14:12
sanujbut Int num_components = 314:12
sanujis translated to14:12
sanujdouble num_components = 3;14:12
sanujwhich gives error14:12
sanujin java14:12
sanujlisitsyn, i checked java.json14:13
sanujlooks fine14:13
-!- sanuj [~sanuj@] has quit [Remote host closed the connection]14:22
rcurtinHeikoS: you were looking for me yesterday14:27
@wikingrcurtin: do you guys maybe have14:30
@wikinga server to share for the benchmark project ?:)14:30
@wikingrcurtin: currently i'm planning to put openstack on our machines14:32
@wikingand setup benchmarking like that14:32
@wiking(and yeah aws instances are super slow :P)14:33
rcurtinwiking: I have one machine that we have used in the past14:33
rcurtinis that enough for your needs?14:33
rcurtinit's not incredibly powerful or fast, just a desktop with an i5 and 16GB RAM14:34
@wikingrcurtin: yeah i mean if you would like to contribute :P14:34
@wikingbut i dont know if that actually would work14:34
@wikinghaving 3 different machines14:34
@wikingbut then again14:35
@wikingif the (dataset+model) from different libs are running on the same machine14:35
@wikingit should be fine14:35
rcurtinI dunno what you mean, do you mean you want me to give the system to the shogun project?14:35
@wikingi meant that we setup an openstack14:35
@wikingand run benchmarks on that openstack14:35
rcurtinhang on, I have to step out14:36
@wikingno worries14:36
rcurtinlet me get back to you later14:36
@wikingthis was just an idea14:36
@wikingstill wondering about stuff14:36
@wikinghow to do the benchmarking14:36
rcurtinyeah I just need to do a little reading14:36
rcurtinI will get back to you :)14:37
@wikingbecause now we more or less moved14:37
@wikingour stuff on aws14:37
@wikingso we've got our machines freed up for benchmarks14:38
@wikingarianepaola: pong pong pooooooooooooong14:40
@wikingSaurabh7-: yoyoyo14:40
-!- Saurabh7 [~Saurabh7@] has joined #shogun14:42
-!- sanuj [~sanuj@] has joined #shogun14:47
@wikingSaurabh7: ping?14:48
Saurabh7wiking, hi !14:49
@wikingSaurabh7: how are things going with the thread safety14:50
@wikingdo you need any help?14:50
Saurabh7wiking, I am trying to use the features copy constructors for now14:51
Saurabh7for DenseFeatures it looks good but i dont know how to generalize14:51
@wikingcopy ctr?14:51
@wikingfor what?14:52
@wikingi mean how?14:52
@wikingif you can share a code14:52
@wikingi can help14:52
@wikingany code14:52
@wikingjust to understand14:54
@wikingwhat happens with the code14:54
Saurabh7CDenseFeatures feat =features14:54
Saurabh7something like this14:54
Saurabh7when we want to add multiple subsets14:55
@wikingso feat->add_subset() would return a CDenseFeatures14:55
Saurabh7no it just adds subset to subsetstack14:55
Saurabh7feat is copy of features14:56
lisitsynmake it immutable!!!14:56
@lambdaySaurabh7: may I ask what are you working on regarding features?14:56
@lambdaySaurabh7: you want to create a shallow copy?14:56
Saurabh7lambday, yes14:56
@wikingSaurabh7: yeah as lisitsyn said and we said yesterday14:57
@wikingso just const14:57
@wikingSaurabh7: you are about to eliminate CSubsetStack14:57
Saurabh7wiking, no14:57
@lambdaySaurabh7: so for dense features, you want that the underlying feature matrix is the same, but the subsets are copied only?14:57
Saurabh7just create a copy and add another subset right ?14:57
Saurabh7lambday, yes14:58
Saurabh7lambday, and adding different new ones14:58
@lambdayI added a similar thing here..14:58
@wikinglambday: why did you do this ?14:58
@lambdaySaurabh7: maybe this can be of any use?14:59
@wikingi mean why do we have FeaturesUtil14:59
@wikingshould have been part of CFeatures14:59
@lambdaywiking: I needed this to create blocks of features (for testing things blockwise) with just different subsets on14:59
@lambdayalthough the underlying matrix is same14:59
Saurabh7lambday, hmm yes14:59
@wikingbut why not part of CFeatures?14:59
@wikingas a simple static function of it? :)14:59
Saurabh7is that only or denseFeatures ?14:59
@lambdaywiking: well, I could add it there.. but I just temporarily added it only for dense14:59
@wikinglambday: yeah as we learned15:00
@wikingif you put a code temporary15:00
@wikingit'll be permanent :D15:00
@lambdaywiking: would something like this be useful for CFeatures in general?15:00
@lambdaynot sure!15:00
@wikingit's part of CFeatures15:00
@wikingso why not15:00
@lambdaywiking: alright then..15:01
@wikingi mean if it's only working for15:01
@wikingjust add it there15:01
@lambdayalright I'll do that15:01
@lambdaywiking: would we like to have it in develop right now?15:02
@lambdaycause I'm working on this feature branch.. and it's all there for now15:02
@wikinglambday: it will not break anything or?15:02
@lambdaywiking: hopefully not..15:02
@wikingif this is not yet in develop15:02
@wikingthen it's fine15:02
@wikingjust move it in your feature branch15:02
@lambdaywiking: so, in CFeatures, we provide this API.. and keep it SGNOTIMPLEMENTED.. and only implement it for DenseFeatures for now..15:03
Saurabh7lambday, isnt it similar to this
@lambdaySaurabh7: I think the other one shares the same subsetstack object as well..15:04
@lambdaySaurabh7: not sure though... can you confirm?15:04
@wikinglambday: yeah sounds good15:05
@wikingi hate SGNOTIMPLEMENTED placeholders :P15:05
@wikingbut yeah i know it would take an effort now15:05
@lambdaywiking: hehe but we can't just run away from it15:05
@wikingto get that implemented everywhere15:05
@wikinglambday: = 015:05
@wikingand then just implement it every derived class15:05
@wikingSaurabh7: can help in that15:06
@wikingi mean15:06
@wikingif yo udont mind15:06
@wikinghe could copy this15:06
@wikingin his branch15:06
@lambdaywiking: yeah but does subset make sense for all the feature subclasses? not sure :/15:06
@wikingand then you'll just rebase and cherrypick his branch for yours15:06
@wikinglambday: imho it shouldn't matter of the type15:06
@wikingi mean why not?15:06
@wikingany feature set should be slicable15:06
@wikingin any ways15:06
@wikingHeikoS: standup: 0/1 ?15:07
@lambdaywiking: alright.. this needs some investigation then :D15:07
@wikinglambday: which part?15:07
@HeikoSwiking: I like it, would keep it to the mentor student pair though, otherwise it is hard to track15:07
@HeikoSso "getting in touch daily" +115:07
@wikingHeikoS: doesn't need to be sync15:07
@wikingit can be async15:07
@lambdaywiking: to check if all the features are subset-able15:07
@wikingthat's why i said just to dump here the stuff15:07
@HeikoSwiking: good then15:07
@HeikoSwe only have 415:07
@wikingand we can look at the logs15:07
@HeikoSso easy to keep track15:07
@wikinglambday: well make it15:07
@HeikoSsanuj, Saurabh7 Saurabh7-  jojo15:08
@wikingi mean this is computers15:08
@wikingyou can do any fucking thing you want15:08
@HeikoSOXPHOS: jo15:08
@HeikoSdiscussing linalg?15:08
@wikinglambday: just add a pure virtual15:08
sanujHeikoS, i have a question15:08
@wikingand then worst case where it's not supported15:08
@wikingyou add a not supported15:08
@lambdaywiking: yeah.. gonna take some time though.. but I'll do that15:08
@HeikoSsanuj: shoot15:08
@wikinggive it to Saurabh715:08
sanujHeikoS, Int is working in cookbook now15:08
sanujHeikoS, but for Java15:09
@wikinglambday: i mean he needs this15:09
@lambdaywiking: as in, I want to make it work for those it is supposed to work15:09
@wikinglambday: but why dont you pass this to Saurabh7 ?15:09
@lambdaywiking: oh.. then let me start with this15:09
@lambdayumm yeah that's also an idea15:09
@wikinghe'll make good use of it15:09
sanujHeikoS, Int component_num = 115:09
@wikingand he can finetune it for you15:09
sanujHeikoS, is converted to15:09
@lambdaywiking: that sounds cool!15:09
@HeikoSsanuj: I saw, this is defined throught the json file15:09
sanujHeikoS, json is fine15:10
@lambdaySaurabh7: ^15:10
sanujHeikoS, json has "Int": "int",15:10
@lambdaySaurabh7: can you confirm that the copy ctor actually uses the same underlying stack?15:10
@HeikoSsanuj: but the json defines the text to text replacement15:10
@HeikoSif it translates to double, then there is a mistake15:10
@HeikoScant really be15:10
Saurabh7lambday, it should be same right? lemme check15:11
@lambdaySaurabh7: I meant same object..15:11
@lambdayI want to add different subsets to each of these shallow copies15:11
@lambdaywithout affecting the other15:11
@HeikoSsanuj: lets talk private15:11
@lambdayand still want to share the same underlying feature matrix15:12
Saurabh7it creates a copy with same subsets15:12
Saurabh7so i think it should be good to add new ones onto it15:13
@lambdaySaurabh7: doesn't it add the same CSubset* objects?15:14
@lambdaySaurabh7: not sure whether that would be a problem15:14
@lambdaySaurabh7: don't worry about this.. if the copy ctor serves your purpose.. go ahead with it15:15
@lambdaySaurabh7: I'll check once again if it serves my purpose..15:16
@lambdayif not, maybe we can add the shallow copy thing .. I'll create an issue15:16
Saurabh7lambday, ok i will try to check with yours too what's the difference15:16
@lambdaySaurabh7: cool! let me know15:16
@wikinganybody has idea how we should patch cmake so that it realises that omp is available in clang 3.8?15:17
@wikingwe eliminated the use of lp solve? :)15:25
@wikingHeikoS: that pr15:26
@wikingis no goodie15:26
@wikingnot every int15:26
@wikingis a int32_t15:26
@wikingi mean yes on your machine probable15:26
@wikingbut shogun runs on other things than x8615:26
@HeikoSwiking: I see15:28
@HeikoSthe point is to have it defined somewhere15:29
@HeikoSfor meta examples15:29
@HeikoSit needs to be visible in the ctags15:29
@HeikoSany idea how to get that otherwise?15:29
@HeikoSlisitsyn:  ^15:29
@wikingwell it's in  stdint.h15:30
@wikingwhich is included in common.h15:30
@wikingmmm HeikoS we really dont use lpsolve anymore15:31
@wikingi'll remove the dependency check ok?15:31
@HeikoSplease do15:31
@wikingi did a git grep USE_LPSOLVE15:31
@wikingand it's only defined15:31
@wikingin config.h.in15:31
@wikingbut nowhere is being used15:31
@HeikoSprobably forgot to guard it15:31
@HeikoSmaybe grep for something where it might be used15:32
@wikingmmm tried different lp functions15:33
@wikingbut if guarding would be a problem15:34
@wikingthen it would have errored15:34
@wikingon my machine15:34
@wikingas it didn't detect lpsolve15:34
@wikingon my machine15:34
@wikingstill it managed to compile everyting15:34
@wikingso i think we removed lp solve dependency15:36
@wikingHeikoS: you reckon we wanna have that back ever?15:37
@wikingthey just released a new version15:38
@wikingafter 5 years15:38
rcurtinwiking: so I am still not sure what you mean15:39
rcurtinit is definitely possible to run the benchmark system on an openstack machine15:39
rcurtinbut I am not sure how we could be helpful with that15:40
rcurtinI don't have any openstack setup15:40
rcurtinall I have are a handful of identical desktops we do benchmarking on15:40
@wikingok so what i thought15:40
@wikingthat we join our machine forces15:40
@wikingusing openstack15:40
rcurtinanother consideration is that openstack will give you Vms15:40
rcurtinso timings may have high variance15:41
@wikingyeah but it's kvm15:41
@wikingso it's barebone15:41
@wikingthe only difference could be15:41
@wikingif you run for example15:41
@wikingkmeans on one machine15:41
rcurtin(on the train and on a phone, slow responses)15:41
@wikingscikit-learn kmeans on one machine15:41
@wikingand then mlpack kmeans on another machine15:41
@wikingbut if it's running on the same openstack instance15:42
-!- Saurabh7 [~Saurabh7@] has quit [Ping timeout: 240 seconds]15:43
@wikingso if we run 1 algo but all it's different versions (mlpack, shogun, scikit-learn) in the same openstack instance15:43
@wikingthen the timing will be the same15:43
@wikingi mean it will not vary15:43
@wikingand this is the shitty part15:44
@wikingthat we are not only interested in relative diff15:44
@wikingbut absolute diff15:44
rcurtinbut still this is running on a VM and so if any other VMs are running on the same hardware you get slowdown15:45
@wikingrcurtin: openstack15:45
@wikingi mean it will not give you more machines15:45
@wikingthan it is available on that node15:45
@wikingi.e there will not be any race condition for cpu/memory15:45
rcurtinyes but what I am saying is when you have more than one Vm on a physical host your timings are no longer accurate15:46
rcurtinif you are saying that openstack is one Vm per physical host15:46
rcurtinthen this is different15:46
@wikingbut why would it be the timing different15:46
rcurtinI am not intricately familiar with openstack15:46
@wikingi mean if you take15:46
@wikingn instances on the machine15:46
arianepaolausing openstack will just add more complexity15:46
@wikingthat fits into the machine15:47
@wikingthey will not cause timing problems15:47
arianepaolayou can even run docker in openstack on top of vagrant....15:47
@wikingarianepaola: but i just want to have a way to be able to share resources15:47
@wikingfor that we need some sort of resource managment15:47
rcurtinwiking: if you can prove tht variability of runs is not larger than with a single non Vm I will believe you15:47
arianepaolaevery layer brings down performance15:47
@wikingto know where resource i available if any15:47
rcurtinbut at the moment I do not15:48
rcurtinI think that a better way to collaborate15:48
@wikingrcurtin: this is not a belief15:48
@wikingi mean this is being used in many places :D15:48
rcurtinis to have shared setup scripts or something like this15:48
@wikinglike rackspace15:48
@wikingbut then again15:48
@wikingit was just an idea15:48
@wikingif you dont believe in it it's fine15:48
@wikingi'm not going to spend time to prove something that is known to be working15:48
rcurtinwiking: that does not mean variance in timing runs is not higher than with a physical host :)15:49
@wikingdo you know what's kvm?15:49
@wikingor xen for that matter?15:49
@wikingeither of them15:49
@wikingwill make sure that you are actually running the actual instruction15:49
@wikingon your cpu15:49
@wikingit's not vm15:49
rcurtinbut you still have limited disk Io15:50
@wikingand we care about disk io perfomance in case of kmeans?15:50
rcurtinif you are benchmarking data load time also then yes15:50
rcurtinand that is currently what the benchmark scripts do15:51
@wikingok so you cannot separate data loading time15:51
@wikingfrom the actual modeling method?15:51
@wikingbecause in that case i suppose the best way to beat the benchmarking is15:51
rcurtinah I need to think about that15:51
rcurtinthere are ways but they can be complex15:51
@wikingthat you write a fully threadable read IO15:51
rcurtinso I need to revisit the code and get back to you15:51
@wikingno worries15:52
@wikingi'm just trying to figure out15:52
@wikinghow both of us can benefit15:52
@wikingbecause having this benchmarks15:52
@wikingwould be great for all of us15:52
@wikingnot just shogun or mlpack15:52
rcurtinyes absolutely15:52
rcurtinbut we should keep in mind that the sustem is (or should be) easy to deploy and run15:52
@wikingother option is to use mesos :P15:53
@wikingand docker images15:53
rcurtinso it should not be 100% necessary to run on the same hardware15:53
@wikingi just want to be able to run this15:53
rcurtindocker might be useful here because getting the right dependencies can be a pain15:53
@wikingon more than one machine15:53
@wikingi have configs for having the build deps for shogun15:53
@wikingit took me a good day15:53
rcurtinit would be easy to write a dockerfile for this, maybe I will do this sometime15:54
@wikingto define that for fedora, centos, debian,ubuntu15:54
@wikingrcurtin: i would highly advise to use for this15:54
rcurtinI have other setup to do with our build server first though :(15:54
@wikingthat way you can generate a docker image15:54
@wikingas well as aws ami15:54
@wikingas well as virtualbox image15:54
@wikingso it's a good tool for such things15:54
@wikingso once you define your config15:54
@wikingyou are golden15:55
@wikingfor any type of 'instance'15:55
rcurtinthough I will not lie, I have little excitement about learning a new tool :)15:55
@wikingyeah i can understand15:56
@wikingbut this way you can hit more birds with 1 stone15:56
shogun-buildbotbuild #12 of xenial - libshogun is complete: Failure [failed test]  Build details are at  blamelist: Viktor Gal <>15:56
rcurtinthat is true15:56
shogun-buildbotbuild #2874 of bsd1 - libshogun is complete: Failure [failed configure]  Build details are at  blamelist: Viktor Gal <>15:56
@wikingrcurtin: do you guys use eigen?15:56
rcurtinback to the original question, what can I do to help with openstack?15:57
rcurtinno, we use armadillo15:57
@wikingrcurtin: i'm still investigating15:57
rcurtinI don't want to commit to using openstack o produce pur benchmarks15:57
@wikingi was just wondering if you'd be interested15:57
@wikingin this type of collaboration15:57
rcurtin*to produce our15:57
rcurtinbut I do want to help15:57
rcurtinand if we can share our configurations this I think would be a good way to go15:58
rcurtinthe thing to keep in mind is, it is very likely at some point we will disagree on how the benchmarks should be done15:59
rcurtinthere is already an issue by the elki maintainer to this matter16:00
rcurtinhang on16:00
rcurtingetting off the train16:00
rcurtinwill finish the thought shortly16:00
@wikinghehehe ok16:00
-!- leagoetz [] has joined #shogun16:03
-!- sanuj [~sanuj@] has quit [Ping timeout: 272 seconds]16:06
OXPHOSHeikoS: sry I didn't log out last night16:08
rcurtinso as more libraries become interested in this more people will have opinions and resolving disputes will become intractable16:08
rcurtindisputes like "do we include I/O" or "what do we set the parameters to for these datasets"16:09
OXPHOSHeikoS wiking: linalg:
rcurtinbecause these little questions can make big differences for which library looks best16:09
rcurtinso I think we should try to avoid these altogether by taking the approach of "you should set this system up yourself and run it in your own setting!" and this should help avoid a lot of long discussions16:09
rcurtinwiking: what do you think?16:09
@wikingjust a sec16:12
@wikingwas doing some other shit16:12
rcurtinsure, no hurry.  I am at my office now so I can type a little faster :)16:12
@wikingyep sounds reasonable16:12
rcurtin is an example of what I mean16:13
rcurtinit started as a simple request and discussion, then suddenly turned into "you should be doing your k-means benchmarking completely differently"16:14
@wikingah fuck16:15
@wikingok nerd war16:15
rcurtinyeah basically that is what it was, I want to avoid more of those where possible :)16:15
rcurtinit sounds to me, like maybe what would be most useful might be a config (I guess I will learn it...) that generates images with the benchmarking system installed?16:16
rcurtinthis way you can just deploy one of these images on openstack, and I can deploy them on my antiquated desktops and live in the past :)16:16
-!- c4goldsw [5da420e6@gateway/web/cgi-irc/] has joined #shogun16:17
@lambdayOXPHOS: hello16:17
rcurtinI think that at some point I would like to compare the variability of runs on openstack and on my desktop, but that should be pretty easy once the system is set up... just run some benchmark ten times or so over a week or something like this, then compare variance16:17
@lambdayOXPHOS: how's it going?16:17
rcurtinand if the variance is very small I will shut up and maybe move to openstack :)16:18
OXPHOSlambday Hey! I added something here:16:18
OXPHOSlambday: I have tested the base-derived class in shogun independent module and I'm now writing dot test case16:19
@lambdayOXPHOS: yeah it would be awesome to have a running demo :)16:21
@wikingOXPHOS: what happens if you dont have HAVE_VIENNACL ?16:22
@lambdayOXPHOS: BTW I was thinking that the thing that wiking suggested, would have been easily doable if we'd have used CUDA16:22
@wikingi mean the problem here is that16:22
@lambdayI was wondering whether we can do the same with opencl+viennacl but I am afraid we cannot16:22
@wikingthis thing will be part of the modular fw16:22
@wikingand that case you dont care HAVE_VIENNACL16:22
@wikingbecause either you have it or dont16:23
@wikingthat means if you dont = you dont have the .so for doing that16:23
@wikingand actually the most interesting part16:24
@wikingabout the gpu backand16:24
@wikingwill we have16:24
@lambdaywiking: without viennacl, there is no way to define memory on the GPU that viennacl, when it **IS** present, can utilize16:24
@wikingand viennacl_opencl .so?16:25
@lambdaywiking: with CUDA we could have a simply int* gpu_vec...16:25
@wikinglambday: but you dont get it right?16:25
@wikingyou dont care about that16:25
@lambdaywith viennacl it is not possible, cause it uses cl_mem16:25
@wikingyou dont need that ifdef16:25
@wikingit should be an opaq pointer16:25
@wikingnot used in the header16:25
@wikingbecause either you can do shit with it16:26
@wikingbecause you have the backend16:26
@wikingand then you compile16:26
@HeikoSlisitsyn:  around?16:26
@wikingor you dont have the backend16:26
@wikingand then you dont have the .so16:26
@wikingor you have the .so16:26
@wikingbut it throws back a16:26
@wiking'fuck i dont have a gpu/opencl compatible shit'16:26
@lambdaywiking: I agree.. this of course should be a pimpl16:27
@wikingshould be an opaque pointer16:27
@wikingthat is only actually used16:27
@wikingin .cpp16:27
@lambdaywiking: agreed!16:27
@wikingok i'm back to debugging fucking qemu16:27
@lambdayOXPHOS: you got the idea?16:27
OXPHOSlambday: the GPUvector subclass need a opaque pointer?16:28
@lambdayOXPHOS: yep!16:28
@wikinglambday: and actually16:28
@wikingthink about how we are gonna support16:28
@HeikoSwiking: got another question16:28
@HeikoStypedef double float64_t;16:28
@HeikoSso this is fine?16:28
@wikingdifferent gpu pointers16:29
@wikingHeikoS: not at all16:29
@HeikoSit is in common.h16:29
@wikingHeikoS: then remove it16:29
@HeikoScan you pls check?16:29
@HeikoSthere is more of that kind16:29
@lambdaywiking: actually we're trying to solve a problem that viennacl guys did already.. they already work with (1) cpu memory (2) opencl gpu memory and (3) cuda memory..16:29
@HeikoSwiking: the reason I touched this is kind of totally unrelated to the sense or nonsense of such definitions16:29
@lambdaytheir vectors and matrices work with all16:30
@lambdaywe're trying to solve the same problem..16:30
@HeikoSso I kind of want to sidestep getting into that for now, but just make the meta examples work16:30
@wikinglambday: yep, buuuut how you wanna solve16:30
@wikingwhen somebody implements16:30
@HeikoSso I wonder, shall we put this as an issue, resolve all of them at once at some point?16:30
@wikinga magma based linalg backend? :)16:30
@wikingthat should be a reasonable idea no? :)16:30
@lambdaywiking: have a different subclass that does the thing..16:30
@wikingsubclass :P16:30
-!- c4goldsw [5da420e6@gateway/web/cgi-irc/] has quit [Quit: - A hand crafted IRC client]16:30
@lambdaywiking: and then have the factory return **that** type when asked for a gpu type16:31
@wikingi mean of GPUVector?16:31
@wikingbecause yeah subclass of gpu backend16:31
@wikingyes yes16:31
@wikingbut i mean16:31
@wikingthe interface16:31
@wikingsince we want to be opaque like this16:31
@wikingthat GPUBackend16:31
@wiking= whichever16:31
@wikingand it'll have the right types16:31
@wikingHeikoS: looking aroudn16:31
@lambdaywiking: I think subclasses of GPUVectors would have to do then..16:32
@lambdayor maybe, have the impl class use the #ifdef's16:32
@lambdaybut then in a platform where multiple of these are available, then we can only use a single one.. so... not good idea!16:33
@wikinglambday: but no16:33
@wikingyou dont care if multiples are available16:33
@wikingyou will have n different .so16:34
@wikingthat implement a gpu backend16:34
@wikingthe user will define it16:34
@wikingwhich one you wanna use16:34
@wikingand in worst case that backend will16:34
@wikingthrow a runtime exception16:34
@wikingon registration16:34
@wikingwhen it checks that actually that shared library16:34
@wikingcan access the right hw or not that it requires to run properly16:35
@wikingsee what i mean?16:35
@wikingand actually this is why we would like to have a config file now for shogun16:36
@wikingwhere you can define these globals16:36
@wikinglike: where are all the shared libraries16:37
@wikingor what's the gpu/cpu backend you wanna use16:37
@wikingwhich of course you can override at runtime16:37
@wikingjust having a default via a conf would be good16:37
@lambdaywiking: can you please clarify what we're supposed to do in registration?16:38
@lambdaydoes it sort of specify that -- this is the *.so that we wanna use16:38
@wikingthis is the .so you will use16:38
@wikingand that should be loaded16:38
@lambday( [XXX=viennacl]16:38
@lambdayand so on16:39
@wikingand then you dlopen that16:39
@wikingregister it in your linalg backend16:39
@wikingi mean you can do this16:39
@HeikoSwiking: whats your thoughts on common.h?16:39
@wikingwhile you do init_shogun()16:39
@wikingbecause we have that shit anyways16:39
@wikingHeikoS: yeah i'm looking16:39
@wikingHeikoS: about the float16:40
@wikingthe ctags i dont know16:40
@wikingbut dont do what you did in your pr16:40
@HeikoSyeah sure16:40
@wikingbecause that's not true16:40
@HeikoSlooking for a better way16:40
@lambdaywiking: alright.. so about implementing it this way, we got to have different subclasses of GPUVectors I assume.. the ones that uses viennacl will be compiled and dumped into that and so on16:41
@lambdayand for that, we have to keep the GPUvector generic.. no impl dependency16:41
@lambdaywiking: is that what you mean16:41
@wikinglambday: why do you have to have different subclass?16:42
@wikingyou can just have different implementation of it16:42
@lambdaywiking: okay I see what you mean16:42
@wikingbecause if you use GPUVector of viennacl backend16:43
@wikingthat linalg backend will implement the gpu 'interface'16:43
@wikingand the backend will know what the opaque pointer actually stands for16:43
@lambdaywiking: yeah16:44
@wikingbut yeah each gpu backend should have its own gpuvector16:44
@wikingand these are not interchangeable16:44
@lambdaywiking: yeah and that's what the different impls will have to take care of16:44
@wikingbut the interface (header) should be the same16:44
@wikingyep yep16:44
@lambdaywiking: yeah I totally agree16:44
@lambdayOXPHOS: ^ :)16:45
OXPHOSlambday: trying to understand16:45
@lambdayOXPHOS: see the main thing here is to develop this architecture..16:45
@lambdayand dot is our lab-rat16:45
@lambdayOXPHOS: so the idea is - GPUVector should be more of an interface of how we communicate with gpu vectors of different impls16:46
OXPHOSlambday: it seems the idea of *GPUbackend in linalg class can stay the same - and the GPUvector will be like GPUbackend16:47
@wikinglambday: my only concern is a bit that branch in the dot() function16:47
@lambdaywiking: what do you mean?16:47
@wikingwell you know that part16:48
OXPHOSlambday: I was thinking users can specify what GPUbackend they want to use by passing different instance of GPUbackend16:48
OXPHOSlambday: but realized now GPUbackend ONLY works for ViennaCL16:48
@wikingwhere you check the SGVectors type,.... where it is stored16:48
@wikinglambday: it might be a good idea to have for the dot function16:49
@wikinga flag16:49
@wikingdot(vectorA, vectorB, type)16:50
@wikingwhere type is16:50
@wiking= AUTO16:50
@wiking= GPU16:50
@wiking= CPU16:50
@wikingand by default the type is AUTO16:50
@wikingthis way you can avoid that branching in the code16:50
@wikingi mean i know it's just couple of instructions16:51
@wikingbut we were bitching about the overhead of virtual16:51
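The `dot(vectorA, vectorB, type)` flag wiking proposes could be sketched like this — names and the enum are assumptions; the GPU case falls back to CPU only so the sketch stays runnable:

```cpp
#include <numeric>
#include <vector>

// Hypothetical backend selector: AUTO keeps today's "check where the
// vector is stored" behaviour, CPU/GPU skip that branch entirely.
enum class Backend { AUTO, CPU, GPU };

template <typename T>
T dot(const std::vector<T>& a, const std::vector<T>& b,
      Backend type = Backend::AUTO)
{
    switch (type)
    {
    case Backend::CPU:
        return std::inner_product(a.begin(), a.end(), b.begin(), T(0));
    case Backend::GPU:
        // a real implementation would forward to the registered GPU
        // backend; CPU fallback here is purely for the sketch
        return std::inner_product(a.begin(), a.end(), b.begin(), T(0));
    case Backend::AUTO:
    default:
        // AUTO is where the storage-location check would live
        return dot(a, b, Backend::CPU);
    }
}
```

With `AUTO` as the default the existing call sites keep working, and callers who already know where their data lives pay no branching cost.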
@lambdaywiking: actually I thought of handling it via virtuals yesterday :D16:51
@lambdaywiking: by a way that the vector impls *know* how to do dot on themselves16:51
@lambdaybut then we won't need the linalg::dot(...) method at all..16:52
@lambdaybecause one can always do16:52
@lambdayauto a = factory::getgpuinstance(a)16:52
@lambdaywhere a is the base class of the vector type16:53
@lambdaya is *of16:53
@lambdaywiking: that eliminates the branching16:54
@lambdaybut the downside is, all linalg operations are member methods now16:54
@lambdaykinda like the linearoperator, linearsolver and eigensolver interface we have16:54
@lambdaywiking: if having it this way is okay API-wise, then we can have this instead16:55
@lambdayI mean, eigen does it anyway16:55
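lambday's alternative — each vector impl *knows* how to dot on itself, so `a->dot(b)` replaces `linalg::dot(a, b)` — might look like this; all class names here are hypothetical:

```cpp
#include <numeric>
#include <vector>

// Hypothetical base class: the virtual dispatch replaces the if-else
// in every linalg call, at the cost of making dot a member method.
template <typename T>
struct BaseVector
{
    virtual ~BaseVector() {}
    virtual T dot(const BaseVector& other) const = 0;
    virtual const std::vector<T>& data() const = 0;
};

template <typename T>
struct CPUVector : BaseVector<T>
{
    std::vector<T> v;
    explicit CPUVector(std::vector<T> x) : v(std::move(x)) {}
    const std::vector<T>& data() const override { return v; }
    T dot(const BaseVector<T>& other) const override
    {
        // a GPUVector subclass would forward to its backend instead
        return std::inner_product(v.begin(), v.end(),
                                  other.data().begin(), T(0));
    }
};
```

The trade-off lambday names is visible here: the branching is gone, but every linalg operation becomes part of the vector/matrix interface, much like the existing eigensolver and linear solver classes.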
@lambdaywiking: what do you think? ^^16:57
@wikingjust a sec16:58
@wikingtrying to figure out why the hell qemu is shit on jessie16:58
OXPHOSlambday: how can we have different implement in the same GPUVector(class)? (Just generally asking)17:01
@lambdayOXPHOS: we can have that using pimpls17:02
@lambdayOXPHOS: say, for example, for viennacl type of vectors, we need an array of vcl::opencl::mem_handle to store the vector17:02
@lambdayOXPHOS: and for CUDA, we store it with simple T*17:02
@lambdaybut in the header, we don't declare shit17:03
@lambdaywe just say, here, have a pointer to an impl.. which knows how to do all those stuff17:03
@lambdayand then we define the impl in the cpp, not in *.h17:03
@lambdayso.. in the cpp, based on different flags, we write code..17:04
OXPHOSso we have a pointer unique_ptr<GPUV> pt, but GPUV can be different based on flags in GPUV.cpp17:04
@lambdaywhen we compile, we can have different *.o based on those flags17:04
@lambdayOXPHOS: precisely :)17:05
OXPHOSlambday: so we still need flags - maybe for the whole class - thought about sth fancier :P17:05
@lambdayOXPHOS: yes but we push the flags to the cpp17:05
@lambdayOXPHOS: the header is flag free17:05
OXPHOSlambday: got it17:06
@lambdayand, inside the header, we don't have to say how to store it.. because the storage is different17:06
@lambdayso that solves the problem :)17:06
OXPHOSlambday: and the same idea for the GPUBackend class?17:07
@lambdayOXPHOS: let me check what we had in there17:09
@lambdayOXPHOS: actually, this class can just have a bunch of static methods I think17:11
@lambdayumm but yeah maybe we need the pimpl there also17:12
OXPHOSlambday: yes seems it can ONLY support one type of GPU module17:12
@lambdayI mean, we won't implement anything in the header anyway17:12
@lambdayyeah this would need some polishing17:12
@lambdaybut let me hear wiking on having it the other way we discussed yesterday... that is, the linalg operations are a part of the vector/matrix classes themselves..17:13
OXPHOSlambday: sure17:13
@lambdaya->dot(b) instead of dot(a, b)17:13
@lambdayI am trying to think of operations where this type of API can pose problems17:14
@lambdaywe already have eigensolver and linear solver that works with this type of interface17:15
OXPHOSit looks ugly..17:16
OXPHOSbut nevermind :)17:17
@lambdayOXPHOS: haha the thing?17:21
OXPHOSlambday: yeah..[feels like] all methods are in SGVector class. But since a, b here are specifically designed for linalg, I guess it's fine.. XD17:24
@lambdayyeah a and b are not SGVector/SGMatrix at all17:25
@lambdaybut yeah i see what you mean17:26
-!- sanuj [~sanuj@] has joined #shogun17:26
@lambdaywe just want to see whether the if-else in every linalg call is a good idea or not..17:26
@lambdaycause if we have it this way, then we save the branching17:26
shogun-buildbotbuild #247 of deb1 - libshogun - PR is complete: Failure [failed git]  Build details are at  blamelist: karlnapf17:31
@HeikoSsanuj: I think I fixed it17:32
@HeikoSsanuj: see my pr17:32
@HeikoSsanuj: I now forbid the use of "Int" "Real", but rather use "int" "real"17:34
@HeikoSthese are basic types and no include path is checked for them17:34
@HeikoSso no problems with ctags17:34
@HeikoSsanuj: waiting for travis17:35
sanujHeikoS, how does it find stuff if you include nothing?17:37
@wikingHeikoS: dunno wtf is happening with git17:38
@wikingfatal: Could not parse object 'f5adee722739e5a892ee968bc51461c5b907236a'.17:38
@wikingfatal: Couldn't find remote ref refs/pull/3205/head17:38
shogun-buildbotbuild #248 of deb1 - libshogun - PR is complete: Success [build successful]  Build details are at
sanujlisitsyn, where to add 'any' in shogun?17:44
lisitsynsanuj: lib17:44
@HeikoSwiking: yeah git is delayed17:46
@HeikoSwiking: when I pushed to my fork17:46
@HeikoSit didnt show up on the page for a few mins17:46
@HeikoSthats why I closed and re-opened old PR17:46
@HeikoSsanuj: sorry what?17:54
@HeikoSsanuj: I now just treat basic types as such, so no include lookup is being done in c++17:54
sanuji see18:06
sanujHeikoS, i'll update my pr once you merge yours18:07
@HeikoSsanuj: had to change something and restart travis18:07
@HeikoSbut should do it now18:07
@HeikoSsanuj: only thing you need to change is to use lower case basic type names18:08
sanujyes and rebase18:08
@HeikoSyep exactly18:08
@HeikoSsanuj: how are things going with the tags?18:08
sanujshifting them into shogun, couldn't do much18:09
sanujdoing it now18:09
sanujand i'm reading sergey's tags implementation from aer18:10
sanujit has better code18:10
lisitsynit has the best code ever18:12
lisitsynbecause it is written by the boss18:12
sanujlisitsyn, ;)18:12
@HeikoSlisitsyn: ^18:13
@HeikoSlisitsyn dance18:14
@HeikoSdoesnt work18:14
@HeikoSthought you might say sorry18:14
sanujlisitsyn, shall i put Tag also in lib?18:20
lisitsynsanuj: yeah probably18:20
lisitsynthe difference between base and lib is vague18:21
lisitsynbut probably it's lib18:21
sanujlisitsyn, any is in aer::impl::any18:26
sanujany reason for impl?18:26
lisitsynno not really18:26
sanujshall i just keep shogun::any18:26
sanujoh actually its a bit different18:30
sanujthe policies are in impl18:31
sanujany is in shogun itself18:31
sanujso no problem18:31
lisitsynyeah feel free to flatten namespaces18:32
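A minimal type-erased value holder in the spirit of the `any` being ported here — a sketch only, not the actual aer implementation, with the `impl` namespace flattened into `shogun` as discussed:

```cpp
#include <typeinfo>
#include <utility>

namespace shogun
{
// Hypothetical minimal Any: stores a value of arbitrary type behind a
// virtual "holder" base, recoverable only by asking for the right type.
class Any
{
public:
    Any() : storage(nullptr) {}

    template <typename T>
    Any(const T& value) : storage(new Holder<T>(value)) {}

    Any(const Any& other)
        : storage(other.storage ? other.storage->clone() : nullptr) {}

    ~Any() { delete storage; }

    Any& operator=(Any other)
    {
        std::swap(storage, other.storage);
        return *this;
    }

    template <typename T>
    T& as() const
    {
        Holder<T>* h = dynamic_cast<Holder<T>*>(storage);
        if (!h)
            throw std::bad_cast(); // wrong type (or empty) requested
        return h->value;
    }

private:
    struct BaseHolder
    {
        virtual ~BaseHolder() {}
        virtual BaseHolder* clone() const = 0;
    };

    template <typename T>
    struct Holder : BaseHolder
    {
        explicit Holder(const T& v) : value(v) {}
        BaseHolder* clone() const override { return new Holder(value); }
        T value;
    };

    BaseHolder* storage;
};
} // namespace shogun
```

The policy classes the chat mentions would generalize the `Holder` hierarchy; this sketch keeps everything in one flat `shogun` namespace as lisitsyn suggests.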
sanujlisitsyn, and the comment stuff that is there in every code file18:32
sanujthe licensing part18:33
lisitsynwhat comment stuff?18:33
sanujand written copyright etc18:33
lisitsynwhat license is it?18:33
sanujother shogun code has gpl18:33
lisitsynkeep it bsd18:36
sanujHeikoS, build is failing18:40
sanujknn failed18:41
sanujthe integration tests18:41
sanujHeikoS,  we'll have to change the data18:42
@HeikoSI think thats it18:43
@HeikoSmight have to change more than that18:43
@HeikoSsince I changed types18:43
@HeikoSchecking locally18:43
@HeikoSmake test worked for me a bit earlier18:43
@HeikoSyeah its the data18:47
sanujfine then18:47
@HeikoSsanuj: updated18:51
@HeikoSthe others should also fail18:51
@HeikoSthough maybe not18:52
@HeikoSno "Int" definitions were in them18:52
@HeikoSbut lets wait for travis18:52
sanujHeikoS, others didn't fail last time18:53
@HeikoSbetter wait, you never know what might go wrong :)18:53
@HeikoSdont wanna break the build in gsoc18:53
sanujcorrect :)18:53
@HeikoSannoying for everyone18:53
-!- besser82 [~besser82@fedora/besser82] has joined #shogun18:59
-!- mode/#shogun [+o besser82] by ChanServ18:59
sanujHeikoS, for BSD what comment do i need to add in the beginning of a new code file19:03
@wikingsanuj: check this file: src/shogun/ensemble/CombinationRule.h19:04
@wikingas an example19:04
@HeikoSadd you name19:05
@HeikoS^ this one has encoding, as suggested by besser8219:05
@wikingah yeah19:05
@wikingthat one was actually a shitty one19:05
@wikingthat was gpl19:05
@wikingsanuj: sorry19:05
sanujokay thanks19:05
sanuji'm adding sergey's name too19:09
sanujlisitsyn, what email19:12
-!- leagoetz [] has quit []19:14
lisitsynsanuj: lisitsyn.s.o@gmail.com19:19
@HeikoSsanuj:  merged19:25
sanuji'll rebase and update19:26
shogun-buildbotbuild #3745 of deb1 - libshogun is complete: Failure [failed compile]  Build details are at  blamelist: Heiko Strathmann <>19:26
shogun-buildbotbuild #661 of trusty - libshogun - viennacl is complete: Failure [failed test]  Build details are at  blamelist: Heiko Strathmann <>19:27
@HeikoSbuild broken19:28
shogun-buildbotbuild #662 of trusty - libshogun - viennacl is complete: Success [build successful]  Build details are at
@HeikoSdeb1 - libshogun wasnt broken by my commit actually19:31
-!- besser82 [~besser82@fedora/besser82] has quit [Ping timeout: 260 seconds]19:32
@HeikoSshogun-buildbot: force build --branch=develop 'deb1 - libshogun'19:36
shogun-buildbotbuild forced [ETA 12m53s]19:36
shogun-buildbotI'll give a shout when the build finishes19:36
shogun-buildbotbuild #3746 of deb1 - libshogun is complete: Success [build successful]  Build details are at
-!- besser82 [~besser82@fedora/besser82] has joined #shogun19:50
-!- mode/#shogun [+o besser82] by ChanServ19:50
-!- sanuj [~sanuj@] has quit [Ping timeout: 240 seconds]20:05
-!- HeikoS [] has quit [Quit: Leaving.]20:31
-!- besser82 [~besser82@fedora/besser82] has quit [Ping timeout: 252 seconds]20:34
-!- besser82 [~besser82@fedora/besser82] has joined #shogun20:41
-!- mode/#shogun [+o besser82] by ChanServ20:41
-!- lambday [8028b10a@gateway/web/freenode/ip.] has quit [Ping timeout: 250 seconds]20:56
@wiking\o/ we have a f23 aarch64 bot21:38
arianepaolawiking: I would like to try to integrate the nightly rpm build into that bot21:42
@wikingwhat do you need from me/21:42
@wikingwe have 2 fedora bots21:43
@wikingone is f22 the other is f2321:43
@wikingand now this aarch64 f2321:43
arianepaolawiking: if you have a base config, I can merge the snippet that I posted to the fedora issue21:44
@wikingoh those steps21:45
@wikingi can try to add21:45
@wikingcan you give me the pr21:45
-!- shogun-buildbot [] has quit [Quit: buildmaster reconfigured: bot disconnecting]21:55
-!- shogun-buildbot [] has joined #shogun21:55
@wikingshogun-buildbot: force build --branch=develop 'FC23 - libshogun - aarch64'21:56
shogun-buildbotbuild #0 forced21:56
shogun-buildbotI'll give a shout when the build finishes21:56
@wikinglets see21:56
@wikinglemme see21:58
shogun-buildbotbuild #0 of FC23 - libshogun - aarch64 is complete: Failure [failed compile]  Build details are at
-!- besser82 [~besser82@fedora/besser82] has quit [Ping timeout: 264 seconds]22:25
-!- besser82 [~besser82@fedora/besser82] has joined #shogun22:26
-!- mode/#shogun [+o besser82] by ChanServ22:26
-!- thoralf [] has joined #shogun22:34
-!- mode/#shogun [+o thoralf] by ChanServ22:34
@thoralfHey *22:34
shogun-buildbotbuild #13 of xenial - libshogun is complete: Failure [failed test]  Build details are at  blamelist: Viktor Gal <>22:37
shogun-buildbotbuild #2875 of bsd1 - libshogun is complete: Failure [failed configure]  Build details are at  blamelist: Viktor Gal <>22:38
arianepaolabye everyone, see you all tomorrow22:52
-!- travis-ci [] has joined #shogun23:10
travis-ciit's Heiko Strathmann's turn to pay the next round of drinks for the massacre he caused in shogun-toolbox/shogun:
-!- travis-ci [] has left #shogun []23:10
--- Log closed Thu May 26 00:00:12 2016