--- Log opened Sun Apr 29 00:00:21 2012
00:07 -!- PhilTillet [] has joined #shogun
00:41 -!- PhilTillet [] has quit [Remote host closed the connection]
03:32 -!- pluskid [~pluskid@] has joined #shogun
03:35 <pluskid> sonney2k: I have no concern about new SGVector/SGIVector now, after reading last night's IRC log
04:59 -!- emrecelikten [~emre@] has joined #shogun
06:33 -!- emrecelikten [~emre@] has quit [Ping timeout: 265 seconds]
06:38 -!- emrecelikten [~emre@] has joined #shogun
07:15 -!- pluskid [~pluskid@] has quit [Quit: Leaving]
07:44 -!- emrecelikten [~emre@] has quit [Ping timeout: 246 seconds]
07:48 -!- n4nd0 [] has joined #shogun
07:48 <n4nd0> good morning
07:50 -!- gsomix [~gsomix@] has joined #shogun
08:00 -!- n4nd0 [] has quit [Quit: Ex-Chat]
08:00 -!- n4nd0 [] has joined #shogun
08:04 <gsomix> good morning
08:28 -!- n4nd0 [] has quit [Ping timeout: 246 seconds]
08:33 -!- Marty28 [] has joined #shogun
08:41 -!- n4nd0 [] has joined #shogun
08:44 -!- Marty28 [] has quit [Quit: Colloquy for iPad -]
08:46 -!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]
09:26 -!- Priyans [~Priyans@] has joined #shogun
09:28 -!- Priyans [~Priyans@] has left #shogun []
09:31 -!- gsomix [~gsomix@] has quit [Ping timeout: 246 seconds]
09:45 -!- gsomix [~gsomix@] has joined #shogun
10:17 -!- wiking [] has joined #shogun
10:17 -!- wiking [] has quit [Changing host]
10:17 -!- wiking [~wiking@huwico/staff/wiking] has joined #shogun
10:27 -!- n4nd0 [] has quit [Ping timeout: 246 seconds]
11:49 -!- blackburn [~qdrgsm@] has joined #shogun
12:31 -!- wiking [~wiking@huwico/staff/wiking] has quit [Remote host closed the connection]
12:31 -!- wiking [] has joined #shogun
12:31 -!- wiking [] has quit [Changing host]
12:31 -!- wiking [~wiking@huwico/staff/wiking] has joined #shogun
<CIA-64> shogun: Sergey Lisitsyn master * r549fddc / (7 files in 3 dirs): Moved rejectionstrategy -
12:37 <shogun-buildbot> build #793 of libshogun is complete: Failure [failed compile]  Build details are at  blamelist: blackburn91@gmail.com
<shogun-buildbot> build #222 of nightly_all is complete: Failure [failed compile]  Build details are at
12:41 <wiking> blackburn: wazza u broke the ssyyyyystem!
12:44 <blackburn> wiking: heh
<CIA-64> shogun: Sergey Lisitsyn master * rda1d7b8 / (2 files): Fixed includes in external libs -
12:55 -!- shogun-buildbot [] has quit [Ping timeout: 246 seconds]
12:59 -!- shogun-buildbot [] has joined #shogun
13:02 <wiking> ocas fix needed: : Exception in thread "main" java.lang.RuntimeException: Out of memory error, tried to allocate -12627258368 bytes using malloc.
13:12 <blackburn> shogun-buildbot has bad health lately
<CIA-64> shogun: Sergey Lisitsyn master * r1455284 / src/shogun/multiclass/MulticlassOCAS.cpp : Fixes memory allocation at ocas -
14:29 -!- pluskid [] has joined #shogun
14:32 <pluskid> I defined shogun::CECOCStrategy::CECOCStrategy(CECOCEncoder *, CECOCDecoder *)
14:32 <pluskid> and exported it to python_modular
14:33 <pluskid> but in the python code, when constructing a CECOCStrategy object
14:33 <pluskid> with a subclass of CECOCEncoder and a subclass of CECOCDecoder
14:33 <pluskid> it complains: NotImplementedError: Wrong number or type of arguments for overloaded function 'new_ECOCStrategy'.
14:34 <blackburn> pluskid: hmm strange
14:34 <pluskid> blackburn: do you know why?
14:34 <pluskid> I also tested in C++ with libshogun
14:34 <blackburn> let me check
14:34 <pluskid> seems OK
<pluskid> this example is in C++, runs without problems:
14:38 <blackburn> pluskid: all looks ok.. let me double check :D
14:38 <pluskid> blackburn: thanks! I'm really confused
14:39 <blackburn> pluskid: try to add
14:39 <blackburn> class CECOCEncoder;
14:39 <blackburn> class CECOCDecoder;
14:39 <blackburn> right before the definition of CECOCStrategy
14:40 <blackburn> however this error looks strange
<pluskid> the full output is this:
14:43 <pluskid> blackburn: OK this time...
14:43 <blackburn> pluskid: it looks like SWIG didn't get that your en- and de-coders are descendants
14:44 <pluskid> blackburn: you cool!
14:44 <pluskid> SWIG sucks!
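The fix blackburn suggests can be sketched in plain C++. The class names follow the conversation, but the bodies below are illustrative stand-ins, not shogun's actual headers: the point is that forward declarations placed before the class that takes the pointers are enough for the parser (the compiler's, and SWIG's) to resolve the constructor signature, after which subclass pointers are accepted normally.

```cpp
#include <cassert>

// Forward declarations right before the class that uses them, as blackburn
// suggests -- for plain pointer parameters this is all that is needed here.
class CECOCEncoder;
class CECOCDecoder;

// Hypothetical stand-in for shogun::CECOCStrategy; the constructor signature
// matches the one pluskid quotes, the rest is illustrative only.
class CECOCStrategy
{
public:
    CECOCStrategy(CECOCEncoder* encoder, CECOCDecoder* decoder)
        : m_encoder(encoder), m_decoder(decoder) {}
    CECOCEncoder* encoder() const { return m_encoder; }
    CECOCDecoder* decoder() const { return m_decoder; }
private:
    CECOCEncoder* m_encoder;
    CECOCDecoder* m_decoder;
};

// Full definitions -- in the real code these live in their own headers.
class CECOCEncoder { public: virtual ~CECOCEncoder() {} };
class CECOCDecoder { public: virtual ~CECOCDecoder() {} };

// Subclasses, mirroring pluskid's python-side usage.
class MyEncoder : public CECOCEncoder {};
class MyDecoder : public CECOCDecoder {};
```

With the hierarchy visible, constructing the strategy from subclass instances works as expected; the SWIG-side failure pluskid saw came from the wrapper not knowing the inheritance relation, not from the C++ itself.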
15:07 <wiking> something changed
15:08 <wiking> i've rebased the shogun repo and now i'm getting better accuracy
15:08 <blackburn> wiking: ???
15:08 <wiking> yeah go figure :)
15:08 <blackburn> wiking: what is the timerange?
15:09 <wiking> mmm good question
15:09 <blackburn> classifier? features?
15:09 <wiking> Sat Apr 14
15:09 <wiking> i've rebased till here
15:09 <wiking> commit c12e96c7f3c662512ebd1054fda4a89bcdc4bdca
15:09 <wiking> Merge: 2f00592 d716ae0
15:09 <wiking> Author: Heiko Strathmann <>
15:09 <wiking> Date:   Sat Apr 14 13:27:52 2012 -0700
<CIA-64> shogun: Chiyuan Zhang master * r1cac205 / (2 files in 2 dirs): Fix SWIG error of not recognizing class hierarchy. (+5 more commits...) -
15:09 <wiking> but i cannot recall
15:09 <wiking> what the previous state was
15:10 <blackburn> wiking: uh I'd rather pray the problem is solved already :D
15:11 <wiking> but it's funny with the CombinedDotFeatures
15:11 <wiking> i've created the combineddotfeatures with HKM
15:11 <wiking> and the result is much worse
15:11 <wiking> than simply joining the features then doing a hkm
15:11 <wiking> but i'm using the same hkm
15:12 <blackburn> wiking: no, that shouldn't be
15:12 <blackburn> it is the same
15:12 <wiking> blackburn: the difference is HUGE
15:12 <wiking> 0.19 vs 0.80
15:12 <wiking> accuracy wise
15:13 <blackburn> wiking: it should be a normalization issue then
15:13 <blackburn> or something like that
15:13 <wiking> blackburn: what i'll try now is to simply do a hkm on each feature
15:13 <wiking> and then
15:13 <wiking> concatenate the features
15:13 <wiking> and see if that makes a difference
15:14 <blackburn> wiking: are you sure the weighting is the same?
15:14 <wiking> blackburn: weightening? :)
15:15 <blackburn> why so? weighting sounds ok :D
15:15 <wiking> blackburn: how do you weight? :)
15:16 <blackburn> wiking: there are weights for each feature instance..
15:17 <wiking> i'm using simple features<float64_t>
15:17 <wiking> so where do u set the weight? :))
15:18 <blackburn> wiking: set_subfeature_weights does the job
15:18 <blackburn> however it obviously needs to be patched to make it possible to use it from modular
15:21 <wiking> set_subfeature_weights(org.shogun.SWIGTYPE_p_double,int) in org.shogun.CombinedDotFeatures cannot be applied to (double[],int)
15:23 <blackburn> exactly what I said
15:24 <wiking> let's see a quick fix for this :0
15:26 <blackburn> wiking: just replace it with SGVector<float64_t>
15:27 <wiking> that does not exist in java modular
15:27 <wiking> but DoubleMatrix is equivalent for the story
15:28 <wiking> or at least it was
15:34 <blackburn> wiking: huh it looks like you do not know how this mapping works ;)
15:34 <wiking> which :)
15:35 <blackburn> wiking: what do you mean?
15:35 <wiking> which mapping :)
15:35 <blackburn> isn't it working in case of SGVector?
15:35 <blackburn> SGVector -- DoubleMatrix
15:35 <wiking> well sgvector doesn't exist per se in java
15:35 <blackburn> SGMatrix -- DoubleMatrix
15:35 <blackburn> does it exist in python?
15:36 <wiking> but yeah DoubleMatrix was till now sufficient in cases where an SGVector was expected
15:36 <blackburn> wiking: it should be sufficient right now too :)
15:39 <wiking> still bitching
-!- n4nd0 [02893bbe@gateway/web/freenode/ip.] has joined #shogun15:58
wikingblackburn: yo this is now really strange15:59
wikingthe combined dot vs concatenating story15:59
-!- n4nd0_ [] has joined #shogun16:00
wikingso now one additional thing. if i take the features, do the HKM on it one by one, but after that i don't use the combineddotfeatures but rather just simply concatenate the features16:00
wikingi'm getting very similar results to the one where first i concatenate all the features and then HKM it16:00
wikingthere's something wrong with combineddotfeatures i'm guessing16:01
CIA-64shogun: Sergey Lisitsyn master * rccbdee1 / (3 files in 2 dirs): Fix for set subfeatures weights -
-!- pluskid [] has quit [Ping timeout: 272 seconds]16:06
blackburnwiking: I don't mind you fixing it ;)16:06
wikingheheh i have no time for it today16:06
wikingbut i'll look into it tomorrow16:06
blackburnyeah and it is not critical16:06
blackburnlooks ok, no idea what can be wrong16:07
-!- pluskid [~pluskid@] has joined #shogun16:08
wikingit looks as if it only uses one feature16:09
wikingi mean feature_obj16:09
blackburnuh code is pretty cumbersome16:09
-!- n4nd0 [02893bbe@gateway/web/freenode/ip.] has quit [Quit: Page closed]16:14
-!- n4nd0_ is now known as n4nd016:14
-!- n4nd0 [] has quit [Quit: Ex-Chat]16:24
-!- n4nd0 [] has joined #shogun16:24
-!- wiking_ [] has joined #shogun16:45
-!- wiking_ [] has quit [Changing host]16:45
-!- wiking_ [~wiking@huwico/staff/wiking] has joined #shogun16:45
-!- wiking [~wiking@huwico/staff/wiking] has quit [Ping timeout: 265 seconds]16:49
-!- wiking_ is now known as wiking16:49
-!- wiking [~wiking@huwico/staff/wiking] has quit [Remote host closed the connection]16:49
-!- wiking [] has joined #shogun16:51
-!- wiking [] has quit [Changing host]16:51
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun16:51
-!- emrecelikten [~emre@] has joined #shogun17:32
-!- pluskid [~pluskid@] has quit [Quit: Leaving]17:36
-!- emrecelikten [~emre@] has quit [Quit: Leaving.]17:41
-!- emrecelikten [~emre@] has joined #shogun17:44
-!- emrecelikten [~emre@] has quit [Quit: Leaving.]17:53
-!- emrecelikten [~emrecelik@] has joined #shogun17:54
-!- emrecelikten [~emrecelik@] has quit [Client Quit]17:55
-!- emrecelikten [~emrecelik@] has joined #shogun18:13
-!- emrecelikten [~emrecelik@] has quit [Client Quit]18:14
-!- emrecelikten [~emrecelik@] has joined #shogun18:18
-!- n4nd0 [] has quit [Ping timeout: 246 seconds]18:47
-!- n4nd0 [] has joined #shogun19:00
-!- karlnapf [] has joined #shogun19:36
-!- in3xes [~in3xes@] has joined #shogun19:44
@sonney2kkarlnapf, you ok too with the SGVector / SGIVector split?19:46
karlnapfsonney2k, tell me more about it19:47
@sonney2kkarlnapf, we agree on that we need some kind of SGVector that allows shared memory, i.e. one can call get_vector multiple times19:49
@sonney2kand garbage collection takes care of cleaning things up19:50
@sonney2kkarlnapf, right?19:50
karlnapfso you want to create a vector class based on SGObject?19:50
karlnapfsonney2k, yes, agreed19:50
@sonney2kso that means we need sth like we have w/ SGObject: refcount, thread locking, and potentially some flag that deleting is not required at all (just some ptr to some memory somewhere)19:51
@sonney2kkarlnapf, so I would create an SGVector that just has all the vector functions and a ptr to the data19:51
@sonney2ke.g. double*, int19:51
@sonney2k(for vector data, length of vector)19:52
blackburnI have doubts we need it at all..19:52
karlnapfsonney2k, when we last talked about it you were against the vector class, what made you change your mind?19:52
@sonney2kkarlnapf, the overhead19:52
karlnapfbut its still there isnt it?19:53
@sonney2kSGIVector will cost like 20 bytes or so more than SGVector19:53
@sonney2kso for low-dim data that is *huge*19:53
karlnapfmmh, so now the idea is to split things so that the overhead is only there when the ref-couting is really needed19:54
@sonney2k(8 bytes for ptr, >8 bytes for potential virtual functions, 4 bytes for refcount, 1 byte for boolean flag)19:54
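The per-object overhead sonney2k itemizes can be made concrete with two hypothetical structs: a plain SGVector-style struct (data pointer plus length) and an SGIVector-style one carrying the ref-counting machinery he lists (a vtable from the virtual destructor, a shared refcount pointer, a do-not-free flag). These are illustrative sketches, not shogun's actual layouts; exact sizes depend on the platform and padding, which is why the comparison below is only relative.

```cpp
#include <cstddef>

// SGVector-like: just the payload, what you want for low-dim data.
struct PlainVec
{
    double* vector;
    int vlen;
};

// SGIVector-like sketch, with the extras from the discussion:
struct RefcountedVec
{
    virtual ~RefcountedVec() {}  // vtable pointer: typically 8 bytes on x86-64
    double* vector;
    int vlen;
    int* refcount;               // shared counter: another 8-byte pointer
    bool do_free;                // "deleting not required" flag: 1 byte + padding
};
```

On a typical 64-bit build the refcounted variant ends up a few dozen bytes per object versus 16 for the plain one, which is the "*huge* for low-dim data" point: with millions of short vectors the bookkeeping can rival the payload.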
19:54 <@sonney2k> karlnapf, yeah - but almost everywhere we use SGIVector
19:55 <@sonney2k> only when performance matters *a lot* we don't
19:55 <karlnapf> seems like a good idea to me
19:55 <karlnapf> my only fear is the transition
19:56 <karlnapf> in many cases in the code, it's not automatically doable
19:56 <karlnapf> because memory management is done around the SGVector class
19:57 <@sonney2k> karlnapf, yes I know
19:57 <@sonney2k> rough road
19:57 <karlnapf> and since our tests don't cover all the code where SGVectors are used .....
19:57 <@sonney2k> but the hack we have currently makes things soo difficult
19:58 <karlnapf> I encountered this with the subsets
19:58 <@sonney2k> karlnapf, that is why I want gsomix to do this asap
19:58 <karlnapf> they are always copied currently
19:58 <blackburn> sonney2k: I am sorry to ask but what is the problem we can't solve right now?
19:58 <@sonney2k> and then after a few weeks this transition should be over
19:58 <@sonney2k> blackburn, as I said shared vectors
19:58 <karlnapf> asap is good
19:58 <@sonney2k> class A has a vector foo
19:59 <@sonney2k> you do a->get_vector()
19:59 <@sonney2k> in some class B
19:59 <@sonney2k> and in some class C
19:59 <@sonney2k> who is supposed to free foo now?
19:59 <@sonney2k> we have lots of issues with uselessly copying labels
19:59 <@sonney2k> but we could just have passed ptrs
20:00 <blackburn> sonney2k: do we have such a problem right now?
20:01 <blackburn> the A B C case
20:01 <karlnapf> blackburn, it's not really a problem, but the longer we wait with this, the more complicated it gets to change the system
20:01 <karlnapf> in the new subset system these shared indices would be extremely useful
20:01 <@sonney2k> blackburn, karlnapf btw - I don't mind if we do a different naming, like SGVector being the one with refcounting
20:01 <@sonney2k> blackburn, yes
20:01 <blackburn> I feel really unhappy with all these things
20:01 <@sonney2k> with labels this appears everywhere
20:02 <@sonney2k> blackburn, which things?
20:02 <blackburn> sonney2k: this sgvector hell :)
20:02 <@sonney2k> blackburn, what do you propose?
20:02 <blackburn> sonney2k: nevermind I am just crying :D
20:03 <@sonney2k> blackburn, just recall last year's gsoc
20:03 <@sonney2k> double*, int -> sgvector was painful
20:03 <gsomix> sonney2k, moin. I will console blackburn.
20:03 <@sonney2k> this here is a piece of cake
20:03 <blackburn> what? what will you do with me?
20:04 <blackburn> sonney2k: it was clear what to do
20:04 <@sonney2k> blackburn, it is now too - we just need an SGVector with refcounts
20:06 <karlnapf> blackburn, sonney2k, I think this will be easier than last year
20:06 <blackburn> sonney2k: is the feature_iterator overhead significant?
20:07 <@sonney2k> blackburn, huge
20:07 <karlnapf> however, it's important that it is done with care. Last year we missed some cases, but this wasn't a problem. One can fix these on the fly nowadays, but with the ref-counting we might run into unexpected memory issues if we forget cases
20:07 <@sonney2k> that is why I don't use it :)
20:07 <blackburn> sonney2k: ehmm the dotfeatures one?
20:07 <@sonney2k> blackburn, yeah
20:07 <@sonney2k> if possible don't use it
20:07 <blackburn> sonney2k: how?
20:08 <@sonney2k> blackburn, try to express things with add* *dot
20:08 <blackburn> I need to support both dense and sparse here (S in SLEP stands for sparse)
20:08 <@sonney2k> karlnapf, yes that is true
20:08 <blackburn> sonney2k: I need the A'*x operation..
20:08 <blackburn> i.e. feature row multiplied with a vector
20:09 <blackburn> sonney2k: any other suggestions?
20:09 <@sonney2k> blackburn, dense_dot?
20:09 <@sonney2k> or is A sparse too?
20:09 <blackburn> sonney2k: A is the feature matrix
20:09 <@sonney2k> is x dense?
20:09 <blackburn> sonney2k: yes
20:09 <@sonney2k> then dense_dot
20:09 <karlnapf> sonney2k, blackburn, gotta go, will be back later this evening
20:10 <@sonney2k> karlnapf, thanks for your opinion
20:10 <blackburn> sonney2k: dense_dot performs a dot product of each vector with a given vector
20:10 <blackburn> karlnapf: ok see you later
20:10 <karlnapf> bye :)
20:10 <blackburn> sonney2k: I need to multiply each *row* of the feature matrix
20:10 <blackburn> err to take the product
20:10 <blackburn> with a vector of size num_vectors
20:11 <blackburn> dense_dot does a different job
20:11 <@sonney2k> blackburn, because it is transposed you mean?
20:11 <blackburn> sonney2k: yes
20:11 <@sonney2k> then store the data transposed
20:11 <blackburn> sonney2k: I need both kinds of operations..
20:12 <blackburn> Ax and A'x
20:12 <@sonney2k> then no idea - one of these operations will be dog slow
20:12 <blackburn> sonney2k: we have a feature iterator in liblinear crammer-singer
20:12 <blackburn> works pretty well though
20:15 <blackburn> sonney2k: hmm it can be done with dot probably?
20:15 <@sonney2k> blackburn, yeah but I introduced this really only to be able to run things - it is not fast
20:15 <@sonney2k> blackburn, the memory access pattern is horrible for dense
20:15 <@sonney2k> for sparse I don't even know how to properly do it
20:15 <@sonney2k> gsomix, do you have time to work on the SGVector stuff *now*?
20:16 <@sonney2k> gsomix, or when would you have time?
20:17 <@sonney2k> karlnapf, btw I suspect mostly double free errors ... and the memleaks we can figure out with --trace-mallocs
20:17 <@sonney2k> so it shouldn't be too bad
20:17 <blackburn> sonney2k: can it be a dense add?
20:18 <blackburn> n_vectors additions?
20:18 <@sonney2k> blackburn, we have add_to_dense_vec if you mean that
20:18 <@sonney2k> blackburn, not sure what you are asking
20:18 <blackburn> sonney2k: I racked my brain - looks like Ax can be done with n_vectors calls of add_to_dense_vec
20:18 -!- karlnapf [] has quit [Ping timeout: 276 seconds]
20:19 <gsomix> sonney2k, right now? Do I have one hour? Some case. ?)
20:20 <@sonney2k> blackburn, ???
20:20 <@sonney2k> gsomix, 1 hour to finish you mean :D
20:21 <@sonney2k> gsomix, I would suggest to (for now) just add all the refcount features to SGVector
20:21 <@sonney2k> then add an int* ptr for the refcount
20:21 <@sonney2k> make the class destructor virtual
20:21 <@sonney2k> and copy the ref/unref stuff over from SGObject
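The recipe sonney2k gives gsomix (int* refcount, virtual destructor, SGObject-style ref/unref) can be sketched as a minimal class. Everything here is an illustrative stand-in, not shogun's implementation: the names are hypothetical, the copy constructor stands in for the shallow sharing get_vector would provide, and assignment is omitted for brevity.

```cpp
#include <cassert>

// Minimal sketch of a refcount-enabled SGVector per the discussion above.
class RefVector
{
public:
    RefVector(int len, bool do_free_ = true)
        : vector(new double[len]), vlen(len),
          refcount(new int(1)), do_free(do_free_) {}

    // Shallow copy: share the buffer and the counter, bump the count.
    RefVector(const RefVector& other)
        : vector(other.vector), vlen(other.vlen),
          refcount(other.refcount), do_free(other.do_free)
    { ++*refcount; }

    virtual ~RefVector() { unref(); }   // virtual dtor, as suggested

    int ref() { return ++*refcount; }

    int unref()
    {
        int rc = --*refcount;
        if (rc == 0)
        {
            delete refcount;
            if (do_free)            // the "deleting not required" flag:
                delete[] vector;    // skip freeing borrowed memory
        }
        return rc;
    }

    double* vector;
    int vlen;

private:
    int* refcount;  // shared across all shallow copies
    bool do_free;   // false => this is just a view onto someone else's memory
};
```

This is exactly the A/B/C scenario from earlier in the log: any number of holders can share the buffer, and whoever drops the count to zero frees it, so "who is supposed to free foo" stops being a question.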
20:22 <blackburn> sonney2k: yes it can be done with add_to_dense_vec
20:22 <@sonney2k> blackburn, explain what you want to do please?
20:22 <blackburn> sonney2k: Ax
20:22 <blackburn> A - the feature matrix
20:23 <blackburn> x is a vector of n_vectors*1 size
20:23 <@sonney2k> and x is any double* ?
20:23 <blackburn> a given double vector
20:23 <gsomix> sonney2k, huh, ok. :]
20:23 <blackburn> A'x is a dense dot range
20:23 <blackburn> and Ax is a sum of dense adds
20:23 <blackburn> that's what I was asking :)
20:24 <@sonney2k> blackburn, I cannot follow on the Ax part
20:25 <blackburn> sonney2k: each ith feature vector is multiplied with x[i]?
20:25 <@sonney2k> gsomix, I think there is no way other than committing this refcount-enabled SGVector then and fixing all the breakage with everyone else together
20:25 <@sonney2k> blackburn, ahh
20:26 <blackburn> sonney2k: it was not clear to me how to use add here - that's why I was asking :)
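The two products blackburn settles on can be sketched with plain arrays. Take A to be dim x n_vectors, stored column-major with each feature vector (example) as one column; the two helpers below are hypothetical stand-ins for shogun's dense_dot and add_to_dense_vec, written out so the index arithmetic is visible. Then A'x is one dot product per feature vector (a "dense dot range"), and Ax is n_vectors scaled accumulations with alpha = x[i], which is the sum-of-dense-adds formulation from the conversation.

```cpp
#include <cassert>

// dense_dot analogue: <feature vector i, x>, x of length dim.
double dense_dot(const double* A, int dim, int i, const double* x)
{
    double sum = 0;
    for (int k = 0; k < dim; ++k)
        sum += A[i * dim + k] * x[k];
    return sum;
}

// add_to_dense_vec analogue: y += alpha * feature vector i, y of length dim.
void add_to_dense_vec(const double* A, int dim, int i, double alpha, double* y)
{
    for (int k = 0; k < dim; ++k)
        y[k] += alpha * A[i * dim + k];
}

// A'x: one dot product per feature vector; out has length n.
void At_times_x(const double* A, int dim, int n, const double* x, double* out)
{
    for (int i = 0; i < n; ++i)
        out[i] = dense_dot(A, dim, i, x);
}

// Ax: n scaled accumulations -- blackburn's "n_vectors calls of
// add_to_dense_vec" with alpha = x[i]; out has length dim.
void A_times_x(const double* A, int dim, int n, const double* x, double* out)
{
    for (int k = 0; k < dim; ++k)
        out[k] = 0;
    for (int i = 0; i < n; ++i)
        add_to_dense_vec(A, dim, i, x[i], out);
}
```

This also makes sonney2k's "one of them will be dog slow" point visible: with this layout A'x streams each column contiguously, while Ax touches the whole output vector once per column (or, if A were stored the other way, the roles swap).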
20:26 <blackburn> so I guess I need to patch liblinear
20:26 <blackburn> to use it too
20:26 <blackburn> sonney2k: is it much faster?
20:26 <@sonney2k> blackburn, I can tell that this will be a hundred times faster
20:27 <blackburn> sonney2k: ooh different issue there..
20:27 <blackburn> crazy indices
20:28 <@sonney2k> w_i == x?
20:28 <blackburn> sonney2k: it is in liblinear
20:28 <blackburn> in my slep code it is simple
20:28 <blackburn> and can be done with add
20:28 <blackburn> but in liblinear it is painful..
20:28 <@sonney2k> because of some subindices
20:29 <blackburn> sonney2k: btw do you think shrinking is possible for libocas?
20:29 <blackburn> like in liblinear
20:29 <@sonney2k> blackburn, ocas usually needs <100 iterations
20:29 <@sonney2k> so what would you want to shrink?
20:30 <blackburn> sonney2k: update only a subset of ws
20:31 <@sonney2k> IIRC that was not the most time consuming operation
20:31 <blackburn> yeah actually yes
20:31 <blackburn> most time consuming is the outputs
20:31 <@sonney2k> and that is even done in parallel (IIRC)
20:32 <blackburn> sonney2k: yes my main concern is that it is still slower
20:32 <blackburn> sonney2k: than liblinear's coordinate descent
20:32 <@sonney2k> blackburn, no way to fix it ...
20:33 <blackburn> bad bad
20:33 <@sonney2k> for 2 classes it can be faster at times
20:33 <@sonney2k> it certainly is more accurate optimization wise
20:33 <blackburn> sonney2k: results are equal but it is times slower :(
20:33 <@sonney2k> but that's about it
20:33 <blackburn> sonney2k: about SLEP - do you think I can already put a gpl header and include the files at least in my fork?
20:34 <@sonney2k> blackburn, did they answer?
20:34 <blackburn> nothing after you did
20:34 <blackburn> but the intention is clear
20:34 <blackburn> sonney2k: maybe I should suggest to do it myself and give them a patched version?
20:35 <blackburn> I admit they seem to be not very good at development stuff..
20:35 <@sonney2k> blackburn, good idea - send them a patch with the added copyright and then continue
20:35 <blackburn> sonney2k: have you seen the code?
20:36 <@sonney2k> n4nd0, btw how are your exams going? how many do you have left?
20:37 <blackburn> sonney2k: my intention was to include it as is but it looks like I have to patch it seriously
20:37 <blackburn> it is still easier though
20:37 <blackburn> than doing it from the paper :D
20:38 <@sonney2k> blackburn, yeah well it is not too bad I would say
20:38 <blackburn> sonney2k: they do loop unrolling manually sometimes and other stuff I do not really like
20:38 <blackburn> and the codestyle is random
20:38 <blackburn> totally random
20:38 <@sonney2k> certainly better than svmlight code :P
20:38 <blackburn> hmm let me check svmlight
20:38 <@sonney2k> blackburn, yeah and twonorm is not an extra function
20:39 <blackburn> sonney2k: svmlight seems to be not that bad..
20:39 <@sonney2k> blackburn, what?
20:40 <blackburn> no, forget what I said
20:40 <@sonney2k> a lot worse
20:40 <blackburn> wtf is going on
20:41 <blackburn> sonney2k: what detailed naming
20:41 <blackburn> qp->opt_g0[i]=(learn_parm->eps[key[i]]-(float64_t)label[key[i]]*c[key[i]]) + qp->opt_g0[i]*(float64_t)label[key[i]];
20:41 <blackburn> I have no idea how to handle this code
20:42  * sonney2k has to stop watering plants before I have to swim home
20:42 <blackburn> sonney2k: are you watering plants and chatting?
20:43 <gsomix> sonney2k - master
20:46 <n4nd0> sonney2k, for the moment I have just one left, on the 7th
20:47 <n4nd0> sonney2k, so I can come back to the work :)
20:47 <n4nd0> Shogun work I mean
20:47 <blackburn> n4nd0: btw if you are free please consider fixing the doc warnings for JLCoverTree
20:48 <n4nd0> blackburn, all right
20:49 <@sonney2k> gsomix, yes?
20:49 <@sonney2k> gsomix, you can also call me $DEITY
20:49 <@sonney2k> n4nd0, yeah that doc fix would be great
20:49 <@sonney2k> pretty annoying to see these warnings all the time
20:50 <@sonney2k> blackburn, yeah I am sitting outside in the garden
20:50 <blackburn> sonney2k: heh that should be nice
20:51 <@sonney2k> gsomix, so what's up?
20:52 <gsomix> sonney2k, thanks for the new word in my vocabulary
20:52 <gsomix> sonney2k, I'm ready.
20:52 <@sonney2k> gsomix, $ you mean :D
20:52 <@sonney2k> gsomix, you did SGVector you mean?
20:52 <gsomix> sonney2k, yep.
20:53 <@sonney2k> ok, please show me
20:53 <gsomix> sonney2k, bjjj, I'm ready to work, I mean.
20:53 <@sonney2k> gsomix, hehe
20:53 <@sonney2k> that is why I was asking
20:54 <gsomix> ok, I see
20:54 <@sonney2k> gsomix, then please go ahead. btw we don't need the sgvector& stuff / typemaps
20:54 <@sonney2k> if this all works ok
20:55 <@sonney2k> gsomix, btw the other thing that is needed and very related to sgvector
20:55 <@sonney2k> is to remove Array/Array2/Array3 etc and replace them with SGVector/Matrix ...
20:56 <@sonney2k> gsomix, but please one step at a time
20:57 <@sonney2k> I will investigate which issues this will create...
20:59 <@sonney2k> and of course there are some :/
21:00 <@sonney2k> CArray2<CPlifBase*> PEN_state_signals
21:00 <@sonney2k> CArray3<T_STATES> psi
21:00 <shogun-buildbot> build #224 of nightly_none is complete: Failure [failed compile]  Build details are at
21:01 <@sonney2k> ahh T_STATES is just uint16_t or so
21:01 <@sonney2k> no problem w/ that
21:03 <gsomix> sonney2k, small question. Should I add new macros for ref/unref? The macros in SGObject work with pointers.
<CIA-64> shogun: Sergey Lisitsyn master * rdc9d421 / src/shogun/classifier/svm/QPBSVMLib.cpp : Fixed wrong include -
21:04 <@sonney2k> gsomix, investigating
<CIA-64> shogun: Sergey Lisitsyn master * reff395d / src/shogun/classifier/svm/SVMLight.cpp : Fixed wrong include -
21:06 <@sonney2k> gsomix, urgs that sucks... SG_REF(&vec) seems to be the only halfway consistent thing
21:08 <blackburn> sonney2k: will somebody fix HMM? :D
21:08 <@sonney2k> gsomix, so let's just do it like that, if we prefer SG_VREF macros later (I don't think so) we can add them at some stage
21:08 <@sonney2k> blackburn, what happened?
21:09 <blackburn> sonney2k: parallel training
21:09 <@sonney2k> blackburn, that is super tough
21:09 <blackburn> yes I am too scared to even take a look
21:09 <@sonney2k> so I guess not
21:09 <blackburn> nice :D
21:10 <@sonney2k> the code for the parallel stuff in HMM is so messy that I can hardly understand it
21:10 <@sonney2k> the only way to fix it is to compare outputs at each iteration of a parallel vs. sequential HMM
21:11 <@sonney2k> *sequentially run HMM
21:12 <@sonney2k> blackburn, do you get emails for all pull request comments on github?
21:12 <@sonney2k> because I for some reason only get some
21:12 <blackburn> sonney2k: no, only for the ones I had a discussion in
21:12 <@sonney2k> blackburn, I am just now reading harshits help request
21:13 <blackburn> sonney2k: yes I did not receive it either
21:13 <@sonney2k> and I think he might be running things with different parameters
21:13 <blackburn> sonney2k: yes I told him that before
21:13 <@sonney2k> ok let me reply in the pull request
21:13 <blackburn> I always miss something
<CIA-64> shogun: Sergey Lisitsyn master * r25b89ec / src/shogun/ui/SGInterface.cpp : Fixed wrong include -
21:14 <blackburn> my commit messages are terribly different today
21:16 <@sonney2k> blackburn, git clean -dfx and do a full compile...
21:17 <blackburn> sonney2k: should be totally fixed now..
21:17 <@sonney2k> or totally b0rken :D
21:17 <blackburn> sonney2k: I did find shogun/classifier | xargs grep to find the wrong includes
21:17 <blackburn> no idea how I missed svmlight but I know why I missed ui
21:18 <blackburn> sonney2k: actually you are closer to the truth - even though it compiles we are b0rken
21:19 <blackburn> sonney2k: do you know what SLEP is for?
21:20 <shogun-buildbot> build #798 of libshogun is complete: Success [build successful]  Build details are at
21:20 <blackburn> sonney2k: ^ ;)
21:20 <blackburn> sonney2k: just fyi, in particular we plan to integrate the tree based stuff from slep
21:20 <blackburn> cases are - features have a tree structure
21:20 <@sonney2k> blackburn, do you have any suggestion what we could do about CArray2<CPlifBase*> PEN?
21:20 <blackburn> or tasks have structure
21:21 <@sonney2k> i see
21:21 <blackburn> or even outputs have a tree structure
21:21 <@sonney2k> your tree labels
21:21 <blackburn> sonney2k: let me check the api of array2..
21:21 <@sonney2k> how are trees encoded?
21:21 <blackburn> sonney2k: internally some lists but I will build some api on top of that
21:21 <blackburn> a list of indices actually
21:22 <@sonney2k> blackburn, so you will create TreeFeatures?
21:22 <blackburn> sonney2k: noo
21:22 <blackburn> it is only a tree structure describing relations between features
21:22 <blackburn> can be applied on top of any features
21:22 <blackburn> I don't think we need to patch features for that
21:23 <@sonney2k> blackburn, wait you mean within each feature vector - the relation between individual dims?
21:23 <blackburn> sonney2k: yes
21:23 <@sonney2k> blackburn, CArray2 is just a 2d-array
21:23 <blackburn> yes I am checking
21:23 <blackburn> why not replace it with sgmatrix?
21:23 <@sonney2k> blackburn, yeah exactly
21:23 <blackburn> ah CPlifBase..
21:24 <blackburn> sonney2k: would it be infeasible to patch it to use flattened indexing?
21:24 <@sonney2k> I just don't want this code duplication
21:24 <blackburn> it looks funny in fact
21:24 <@sonney2k> blackburn, no but it is nice
21:24 <blackburn> if you would need a 4d array
21:24 <blackburn> would it be Array4? :D
21:25 <blackburn> sonney2k: then no idea how to dedup code there
21:26 <blackburn> I assume we deny non-numeric types in sg types
21:26 <@sonney2k> blackburn, I simply want to get rid of Array{,2,3} and use the SGVec* stuff
21:26 <blackburn> sonney2k: I thought you deny any pointers in SGVector?
21:27 <@sonney2k> we never needed 2d arrays so far IIRC
21:28 -!- n4nd0 [] has quit [Ping timeout: 246 seconds]
21:28 <@sonney2k> and it creates huge overhead in the swig interfaces
21:28 <@sonney2k> since we have to support all types
21:30 <@sonney2k> hmmhh what is this free_array for
21:30 <@sonney2k> seems to be a nop
21:32 <blackburn> sonney2k: which free_array?
21:32 <@sonney2k> blackburn, I don't know if you like it but how about just moving this Array2/3 functionality into Dyn*Array
21:32 <@sonney2k> I mean if we add some access operators called element(int i, int j, int k) ...
21:32 <blackburn> sonney2k: I don't mind at all
21:33 <@sonney2k> and other constructors accepting 2 (or 3) indices
21:33 <@sonney2k> it could just be inside the same thing
21:33 <blackburn> to be honest I actually do not like this infrastructure at all.. :D
21:33 <@sonney2k> I mean it's not necessary to have 3 classes for that
21:33 <@sonney2k> which infrastructure?
21:33 <blackburn> the DynArray stuff, etc
21:33 <@sonney2k> blackburn, the one in Array*
21:33 <blackburn> I do not really understand why it is better than some stl stuff
21:34 <@sonney2k> blackburn, great
21:34 <@sonney2k> which stl stuff?
21:34 <blackburn> sonney2k: why not use std vector instead of dyn array?
21:35 <blackburn> sonney2k: like in the multiclass machines
21:36 <blackburn> sonney2k: the 2d 3d stuff can actually be done in place using macros I think
21:38 <@sonney2k> no macros
21:38 <@sonney2k> just functions
21:39 <blackburn> sonney2k: functions of dyn arrays?
21:40 -!- n4nd0 [] has joined #shogun
21:45 -!- in3xes [~in3xes@] has quit [Read error: Connection reset by peer]
21:51 -!- n4nd0 [] has quit [Ping timeout: 246 seconds]
21:52 <blackburn> In file included from ../shogun/lib/memory.h:16:0,
21:52 <blackburn>                  from ../shogun/lib/common.h:15,
21:52 <blackburn>                  from ../shogun/features/DotFeatures.h:15,
21:52 <blackburn>                  from ../shogun/lib/slep/slep_tree_lsr.h:15,
21:52 <blackburn>                  from lib/slep/slep_tree_lsr.cpp:11:
21:52 <blackburn> /usr/include/stdio.h:30:1: error: expected unqualified-id before string constant
21:53 <blackburn> sonney2k: have you ever seen something like that?
21:54 <@sonney2k> blackburn, yes but I don't remember
21:55 <blackburn> sonney2k: could you please edit my name in your g+ post? ;)
21:57 <blackburn> sonney2k: reason: a forgotten ; in another included .h
22:05 <@sonney2k> blackburn, Sergey?
22:05 <@sonney2k> or sth else?
22:05 <blackburn> yes, Lisitsyn
22:05 <blackburn> it is Sergej Lisityn currently
22:06 <blackburn> I am ok with j but the last name is wrong ;)
22:06 <@sonney2k> Sergey Lisitsyn then - share it :D
22:07 <blackburn> sonney2k: done ;)
22:08 <@sonney2k> alright - I think adding CArray{1,2,3} to DynArray and DynamicArray should enable us to drop the CArray* crap
22:08 <gsomix> sonney2k, SG_REF(&vec) | hehe, but { if (x) { if ((x)->unref()==0) (x)=NULL; } }
22:08 <@sonney2k> reduced compile time - all good
22:09 <@sonney2k> gsomix, yes?
22:09 <@sonney2k> what is the problem?
22:09 <blackburn> it loses the pointer lol
22:09 <@sonney2k> blackburn, even worse - impossible
22:10 <gsomix> sonney2k, aha.
22:10 <@sonney2k> gsomix, so SG_VREF it is then...
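The problem gsomix quotes can be sketched in a few lines. The existing SG_UNREF-style macro assigns NULL into its argument, so the argument must be an assignable pointer variable; `SG_UNREF(&vec)` on a stack vector cannot compile, since `&vec` is not an lvalue. Hence the separate V-variant that operates on the object itself. Obj and SG_VUNREF below are hypothetical stand-ins for illustration (and the toy unref only decrements; the real macro's branch deletes the object at zero).

```cpp
#include <cassert>
#include <cstddef>

// Toy object with SGObject-style ref counting, for illustration only.
struct Obj
{
    int rc = 1;
    int ref()   { return ++rc; }
    int unref() { return --rc; }  // real code deletes the object at 0
};

// The pointer-based macro from the log: it NULLs x, so x must be an
// assignable pointer variable -- SG_UNREF(&vec) would not compile.
#define SG_UNREF(x) { if (x) { if ((x)->unref()==0) (x)=NULL; } }

// Vector-variant sketch: works on the object itself, no NULL-ing,
// which is what makes it usable on stack-allocated SGVectors.
#define SG_VUNREF(v) { (v).unref(); }
```

The macro pair keeps call sites uniform during the transition sonney2k describes: heap-held SGObjects keep SG_REF/SG_UNREF, while value-type vectors get the V-variants.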
22:10 <blackburn> yay! managed to compile the first version of the tree regularized shit
22:10 <blackburn> install: cannot stat `libshogun.a': No such file or directory
22:10 <blackburn> I get this error sometimes
22:11 <blackburn> what can be the cause?
22:11 <@sonney2k> never seen it
22:11 <@sonney2k> really *never*
22:11 <blackburn> sonney2k: it stays until I clean up..
22:12 <@sonney2k> blackburn, sounds like some dependency in the makefile is not ok. i guess only you have the qualification to produce this error :D
22:12  * gsomix is trying to defuse a new laptop.
22:13 <blackburn> sonney2k: autogenerated or set before?
22:14 <blackburn> sonney2k: there are routines in slep solving problems that require the W vector to be w1>w2>w3>...>wn :D
22:16 -!- karlnapf [] has joined #shogun
22:17 <@sonney2k> blackburn, you mean increasing sparsity or reducing the norm?
22:18 <blackburn> sonney2k: a lasso problem *but* with decreasing coefficients
22:19 <@sonney2k> ok makes sense
22:19 <blackburn> sonney2k: looks like it would be hard to promote sparsity when features are in the wrong order in terms of influence..
22:19 <blackburn> sonney2k: that's why slep is of interest not only in the multitask scope
22:19 <blackburn> there are a lot of other problem solvers
22:20 <blackburn> however they share the same thing
22:29 <gsomix> sonney2k, I need your review.
22:36 -!- PhilTillet [] has joined #shogun
22:38 <PhilTillet> hello everybody
22:39 <@sonney2k> gsomix, done
22:40 <gsomix> sonney2k, aha.
22:41 <gsomix> sonney2k, so what about void free_vector()? I think it should go.
22:41 -!- karlnapf [] has quit [Ping timeout: 276 seconds]
22:44 <@sonney2k> gsomix, yes indeed
22:45 <@sonney2k> also destroy_vector
22:45 <@sonney2k> gsomix, ^
22:45 <gsomix> sonney2k, and there's no need to allocate m_refcount if do_free == false, is there?
22:46 <@sonney2k> yes, set it to NULL and good
22:46 <@sonney2k> but rename do_free
22:46 <@sonney2k> to what I said in the comment
22:47 <@sonney2k> gsomix, once this is done nothing will compile
22:47 <gsomix> sonney2k, aha.
22:48 <@sonney2k> gsomix, so the quick fix is to replace vec.destroy_vector() and vec.free_vector() with SG_VUNREF(vec)
22:48 <@sonney2k> once this compiles we commit this
22:48 <@sonney2k> and gradually (with the help of hopefully many others) fix the rest
22:48 <@sonney2k> s/rest/the crashers/
22:50 <PhilTillet> sonney2k, is there any documentation I can find to get a sense of how svm_train() works? Is it SMO?
22:53 <PhilTillet> blackburn, i've been super unclear? :p
22:53 <blackburn> PhilTillet: svm_train() of?
22:53 <PhilTillet> wait no
22:53 <PhilTillet> I heard libsvm was based on SMO, but not exactly SMO
22:54 <@sonney2k> PhilTillet, look at
22:54 <@sonney2k> which SVM do you mean :D
22:54 <blackburn> we have a few :D
22:55 <@sonney2k> gsomix, I have to sleep now, will be back tomorrow evening...
22:55 <PhilTillet> yep now I have read some literature about svm training, chunking, interior points and SMO :p I want to start with SMO first, but I do not really know whether this is what libsvm implements :p
22:55 <PhilTillet> good night sonney2k :)
22:55 <gsomix> sonney2k, nite
22:56 <blackburn> PhilTillet: are you familiar with dual / primal formulations?
22:56 <PhilTillet> blackburn, well not super experienced but yes I know a bit about them
22:56 <blackburn> formulations are the key I'd say
22:57 <blackburn> PhilTillet: what do you want to learn exactly?
22:57 <PhilTillet> I spent the last week reading a book about svms, but it is kinda complicated :p
22:58 <PhilTillet> blackburn, nothing in particular, I want to start implementing something for OpenCL
22:58 <PhilTillet> I thought about SMO cause Interior Points is somewhat not adapted to big datasets
22:59 <PhilTillet> and chunking sounds outdated :p
22:59 <blackburn> I think some gradient based methods would be more openclable
23:00 <PhilTillet> well, no challenge :p it's just about using opencl-compatible linalg libs :p
23:00 <PhilTillet> I have some papers that cuda-ized smo
23:00 <PhilTillet> and had some impressive improvements (50x ...)
23:02 <blackburn> no surprise there :)
23:02 <PhilTillet> seems like a good challenge and it can be very fruitful :p
23:03 <PhilTillet> I just do not know where to start :p is SMO what is used in classifier_libsvm.cpp?
23:09 <blackburn> PhilTillet: libsvm is smo type but not exactly smo
23:10 <PhilTillet> it's what I needed :)
23:10 <PhilTillet> thanks ^^
23:17 <gsomix> good night guys
23:17 <PhilTillet> good night :)
23:30 -!- gsomix [~gsomix@] has quit [Ping timeout: 246 seconds]
23:38 -!- blackburn [~qdrgsm@] has quit [Quit: Leaving.]
--- Log closed Mon Apr 30 00:00:37 2012