Shogun Machine Learning - IRC Logs
--- Log opened Mon Jun 04 00:00:41 2012
-!- heiko [~heiko@host86-181-155-21.range86-181.btcentralplus.com] has quit [Ping timeout: 260 seconds]
00:08 shogun: Heiko Strathmann master * rbe0a1f5 / (2 files): -added create_centering_matrix which returns a matrix that can be used - http://git.io/aeUyOw
00:33 shogun: Heiko Strathmann master * r7830e2e / (2 files): Merge pull request #564 from karlnapf/master - http://git.io/t9J37g
00:33 heiko: are you sure you need the centering matrix?
00:35 -!- heiko [~heiko@host86-176-176-136.range86-176.btcentralplus.com] has joined #shogun
00:35 heiko: why do you need the centering matrix?
00:36 blackburn hi
00:36 It makes things much easier
00:36 hi :)
00:36 in conjunction with my new matrix_multiply method
00:36 I just want to warn you it is inefficient
00:36 mmh, well I know
00:37 but I think in this case it is negligible
00:38 since the expensive parts happen somewhere else
00:38 good night guys
00:38 what do you need to center?
00:38 and how?
00:38 n4nd0: good night
00:38 -!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Quit: leaving]
00:38 kernel matrices
00:39 ok if C is the centering matrix and K is the kernel matrix
00:39 you need C K C?
00:39 K*C
00:40 K*C*L
00:40 for K and L kernel matrices
00:40 ??!
00:40 wow
00:40 ok then I do not really have an idea how to do that better
00:41 IIRC C K C is CMath::center_matrix
00:41 yes I think its fine
00:41 really?
00:41 mmh
00:41 yes
00:41 I'll check
00:41 CKC is actually
00:41 subtract column mean
00:41 subtract row mean
00:41 and add grand mean
00:41 much faster
00:42 oh yes seeing it
00:42 thats actually better
00:42 -!- zxtx [~zv@cpe-75-83-151-252.socal.res.rr.com] has quit [Remote host closed the connection]
00:48 -!- heiko [~heiko@host86-176-176-136.range86-176.btcentralplus.com] has quit [Ping timeout: 260 seconds]
00:55 -!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]
01:02 -!- heiko [~heiko@host86-177-117-162.range86-177.btcentralplus.com] has joined #shogun
01:14 shogun: Heiko Strathmann master * r95228e9 / (2 files in 2 dirs): -added SGVector method for sum - http://git.io/rEuQ-w
01:29 shogun: Heiko Strathmann master * r2b27934 / (2 files in 2 dirs): Merge pull request #565 from karlnapf/master - http://git.io/kxlbyQ
01:29 hahh
01:30 heiko: I managed to cite Karl Popper :D
01:30 who is that? :)
01:30 wow I thought you would know him :)
01:30 german philosopher
01:31 heiko: VC capacity is related to Popper's falsifiability
01:31 what did he do? :)
01:31 really, hehe citing philosophy in ML dissertations? :)
01:31 yes
01:32 :D
01:32 just for fun
01:32 heiko: ok by Popper scientific knowledge should be falsifiable
01:32 i.e. there should be a case when this knowledge is false
01:32 I once wanted to cite a guy called Scott E. Fahlman
01:33 who invented the :-) sign
01:33 VC capacity of the linear discriminant is d+1 you know
01:33 yes
01:33 oh this VC-stuff, I find it annoying
01:33 d+1 is the number of points *to falsify* the linear classifier
01:33 to falsify in the sense of Popper
01:33 all the bounds are non-tight and so philosophical :)
01:34 ah nice
01:34 thats a nice connection
01:34 there was a paper by vapnik and corfield describing the relation between VC and falsifiability
01:34 yes
01:34 that's why I wanted to mention this
01:34 probably good, how is it going with your thing?
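The centering trick heiko and blackburn settle on above can be sketched like this (a toy illustration, not Shogun's actual CMath::center_matrix code): centering a kernel matrix K as C K C, with C = I - (1/n)·11^T, gives the same result as subtracting row and column means and adding back the grand mean, but the latter costs O(n^2) instead of two O(n^3) matrix products.

```python
def center_explicit(K):
    """Center K the slow way: explicitly form C = I - (1/n) 11^T and compute C K C."""
    n = len(K)
    C = [[(1.0 if i == j else 0.0) - 1.0 / n for j in range(n)] for i in range(n)]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    return matmul(matmul(C, K), C)


def center_means(K):
    """Center K the fast way: subtract row/column means, add back the grand mean."""
    n = len(K)
    row_mean = [sum(row) / n for row in K]
    col_mean = [sum(K[i][j] for i in range(n)) / n for j in range(n)]
    grand = sum(row_mean) / n
    return [[K[i][j] - row_mean[i] - col_mean[j] + grand
             for j in range(n)] for i in range(n)]
```

Both routes agree entry-by-entry, and every row of the centered matrix sums to zero.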
01:35 90% ready I think
01:35 https://dl.dropbox.com/u/10139213/ml/thesis.pdf
01:35 that's how it looks for now
01:35 heiko: 40 is my favourite number so I think 40 references will stay :D
01:37 hey, I can understand which topic some pages are about :)
01:37 that's easy
01:37 I think
01:37 ah sobel filters, I wonder what the russian word for that is :)
01:37 no that's prewitt
01:38 sobel is 1 3 1
01:38 oh yes
01:38 sorry :)
01:38 why are you writing in russian?
01:38 argh
01:38 1 2 1
01:38 heiko: I have no choice
01:38 really?
01:38 yeah I don't think it is possible
01:39 ok
01:39 btw I just saw that liblinear now has svr
01:39 thats really cool
01:39 yeah
01:39 and we already have it in shogun right?
01:39 yes - thanks to sonney2k@island
01:39 :D
01:39 I work on some regression too btw
01:40 -!- heiko1 [~heiko@host86-177-117-162.range86-177.btcentralplus.com] has joined #shogun
01:43 blackburn, argh connection troubles again :(
01:43 I gotta go to bed anyway ...
01:43 good night!
01:43 I just said I do some regression too
01:43 and currently I'm working on a thing that learns linear regression models with task tree regularization
01:43 ah ok so thats nice for you too
01:43 should be cool
01:43 nice :)
01:43 not really
01:44 -!- heiko [~heiko@host86-177-117-162.range86-177.btcentralplus.com] has quit [Ping timeout: 260 seconds]
01:44 I implement another method
01:44 heh okay so see you tomorrow
01:44 take care!
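For the record on the filter mix-up above (a small aside, not code from the thesis): the Prewitt and Sobel horizontal-gradient kernels are each the outer product of the derivative vector [-1, 0, 1] with a smoothing column — [1, 1, 1] for Prewitt, [1, 2, 1] for Sobel.

```python
def gradient_kernel(smooth):
    """Build a 3x3 horizontal-gradient kernel as smooth (column) x derivative (row)."""
    deriv = [-1, 0, 1]
    return [[s * d for d in deriv] for s in smooth]

prewitt_x = gradient_kernel([1, 1, 1])  # uniform smoothing
sobel_x = gradient_kernel([1, 2, 1])    # center-weighted smoothing
```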
01:44 bye
01:44 -!- heiko1 [~heiko@host86-177-117-162.range86-177.btcentralplus.com] has left #shogun []
01:44 -!- zxtx [~zv@cpe-75-83-151-252.socal.res.rr.com] has joined #shogun
01:46 build #600 of csharp_modular is complete: Failure [failed compile]  Build details are at http://www.shogun-toolbox.org/buildbot/builders/csharp_modular/builds/600  blamelist: heiko.strathmann@gmail.com
01:59 -!- blackburn [~blackburn@31.28.59.65] has quit [Read error: Operation timed out]
02:27 -!- av3ngr [~av3ngr@122.110.222.137] has joined #shogun
02:33 -!- av3ngr [~av3ngr@122.110.222.137] has quit [Remote host closed the connection]
02:33 -!- puffin444 [62e3926e@gateway/web/freenode/ip.98.227.146.110] has quit [Quit: Page closed]
02:41 -!- zxtx [~zv@cpe-75-83-151-252.socal.res.rr.com] has quit [Remote host closed the connection]
02:57 -!- zxtx [~zv@cpe-75-83-151-252.socal.res.rr.com] has joined #shogun
03:22 -!- wiking_ [~wiking@208.76.55.198] has joined #shogun
04:48 -!- wiking_ [~wiking@208.76.55.198] has quit [Changing host]
04:48 -!- wiking_ [~wiking@huwico/staff/wiking] has joined #shogun
04:48 -!- wiking_ is now known as wiking
04:51 -!- wiking_ [~wiking@78-23-189-112.access.telenet.be] has joined #shogun
04:54 -!- wiking_ [~wiking@78-23-189-112.access.telenet.be] has quit [Changing host]
04:54 -!- wiking_ [~wiking@huwico/staff/wiking] has joined #shogun
04:54 -!- wiking [~wiking@huwico/staff/wiking] has quit [Ping timeout: 256 seconds]
04:55 -!- wiking_ is now known as wiking
04:55 -!- av3ngr [~av3ngr@106.70.150.8] has joined #shogun
05:25 -!- av3ngr [~av3ngr@106.70.150.8] has quit [Read error: Connection reset by peer]
05:25 -!- av3ngr [av3ngr@nat/redhat/x-xhwxdalqowhhovrg] has joined #shogun
06:46 -!- av3ngr [av3ngr@nat/redhat/x-xhwxdalqowhhovrg] has quit [Client Quit]
06:50 -!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]
07:04 -!- wiking [~wiking@78-23-189-112.access.telenet.be] has joined #shogun
07:29 -!- wiking [~wiking@78-23-189-112.access.telenet.be] has quit [Changing host]
07:29 -!- wiking [~wiking@huwico/staff/wiking] has joined #shogun
07:29 -!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun
07:33 -!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Ping timeout: 260 seconds]
09:17 -!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun
10:50 -!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]
11:26 -!- blackburn [~blackburn@31.28.59.65] has joined #shogun
11:44 -!- pluskid [~pluskid@li400-235.members.linode.com] has joined #shogun
12:06 -!- pluskid [~pluskid@li400-235.members.linode.com] has quit [Client Quit]
12:11 -!- flxb [~cronor@fb.ml.tu-berlin.de] has joined #shogun
12:29 -!- foo__ [2e1fd566@gateway/web/freenode/ip.46.31.213.102] has joined #shogun
13:05 hi
13:05 I have some questions about "CommWordStringKernel"
13:06 shall I use the SortWordString preprocessor?
13:07 is it possible to fix the k parameter for the k-mer frequency in the spectrum kernel?
13:07 -!- uricamic [~uricamic@cmpgw-27.felk.cvut.cz] has joined #shogun
13:13 -!- foo__ [2e1fd566@gateway/web/freenode/ip.46.31.213.102] has quit [Quit: Page closed]
13:58 -!- pluskid [~pluskid@111.120.32.87] has joined #shogun
14:11 -!- wiking [~wiking@78-23-189-112.access.telenet.be] has joined #shogun
14:18 -!- wiking [~wiking@78-23-189-112.access.telenet.be] has quit [Changing host]
14:18 -!- wiking [~wiking@huwico/staff/wiking] has joined #shogun
14:18 -!- wiking [~wiking@huwico/staff/wiking] has quit [Client Quit]
14:18 -!- romi_ [~mizobe@187.66.121.115] has joined #shogun
14:30 -!- blackburn [~blackburn@31.28.59.65] has quit [Ping timeout: 260 seconds]
14:42 -!- pluskid [~pluskid@111.120.32.87] has quit [Ping timeout: 244 seconds]
15:01 -!- pluskid [~pluskid@202.130.113.138] has joined #shogun
15:01 -!- heiko [~heiko@dhcp-174-242.internal.eduroam.ucl.ac.uk] has joined #shogun
15:20 -!- nicococo [~nico@lacedcoffee.ml.tu-berlin.de] has joined #shogun
15:29 hey nicococo
15:29 hola dudes
15:29 how is it going?
15:30 well, it's a bit chaotic (nips deadline, some people visiting, seminar preparation, ...) but okay
15:30 how were your exams and how is the vanilla sosvm ..
15:31 the exam went well, I passed the course
15:32 regarding the vanilla sosvm
15:32 I have made some progress with the opt. algorithm
15:33 I think I should have it finished soon
15:33 have you seen the code?
15:33 not yet.. how can i download your changes again?
15:34 just check them on github, branch so: https://github.com/iglesias/shogun/blob/so/src/shogun/so/PrimalMosekSOSVM.cpp
15:36 so what I have left is the part of the constraints Ax <= b
15:37 to build A and b to give to the QP solver
15:37 yep
15:38 is the max_slack selection correct?
15:39 lines 103-104?
15:39 (well i guess it is) the lines 99-105 yes..
15:40 cur_list = (CList*) results->get_element(i);
15:40 each example has its own list, right?
15:40 yes
15:40 I discovered that yesterday ....
15:40 okay :)
15:40 the max_slack looks right to me
15:41 let me know what bugs you
15:41 yes.. i think you can skip lines 107-116 and just do the same thing in 120-126
15:42 (just init max_slack with -inf)
15:42 120-126 are there to handle the case when the lists are empty
15:43 for the first iteration of the outer do ... while
15:44 well, for me it would be nice to have this functionality in one place but it's up to you ;)
15:45 do you mean the piece of code to insert?
15:45 ok ... I will take a look at it
15:46 it's not really important..
15:46 CPrimalMosekSOSVM::compute_loss_arg(CResultSet* result)
15:46 shouldn't it be part of the application ?
15:46 (i remember the coffin discussion)
15:47 mmm
15:48 I just defined this method as a shortcut
15:48 since we are doing that operation quite a few times
15:48 taking into account coffing
15:48 coffin sorry :D
15:48 a propos: i need coffee..
15:49 just wait 30sec ;)
15:49 ok
15:49 back again.. coffeinized
15:52 nice
15:52 now i can type twice as fast..yeah
15:53 so I think that using the coffin strategy here
15:53 predicted_delta_loss(int32_t idx) ??
15:53 I think that will be useful to build A
15:53 let's get into that
15:55 so as I understand
15:55 well okay..
15:55 these coding issues are rather small right
15:55 each of the CResultSets that are stored
15:55 lets get it to work
15:55 okay
15:55 yeah ... it's almost a matter of style
15:55 so each of the CResultSets is associated with a constraint right?
15:56 yes
15:56 and this structure is always growing in the algorithm
15:56 no element is removed from there
15:56 yes
15:56 no element is removed (BY NOW)
15:56 so the constraints that are introduced in the QP in the first iteration will be used as well in the last one
15:57 there is a nice and simple heuristic
15:57 YES
15:57 ok
15:57 i know this sounds shitty
15:57 there will be thousands of constraints in A in the end
15:57 I see
15:58 and the solver will become slower and slower
15:58 but the argmax will still be the bottleneck
15:58 but as i mentioned there is a nice heuristic to remove inactive constraints before solving the qp
15:58 in most applications the argmax is the bottleneck
15:58 (being ~10 times slower than the optimizing part)
15:59 I see
15:59 I have checked a videolecture about how to handle cases when the argmax gets intractable
15:59 of course that depends on the application...
15:59 I guess that speeds things up
15:59 to use approximations for the argmax and the like
16:00 there is also a paper that states: why approximations don't wokr :)
16:00 sry, work
16:00 :D
16:00 it is not a good idea to use them then?
16:01 for some applications we have LINEAR time algos that solve the argmax
16:01 (HMM, CRF)
16:01 oops hmm for sosvm and crf
16:01 and yes, i find approximations very interesting..
16:02 but thats another topic
16:02 yes
16:02 let's focus again
16:02 so I did some paper work
16:02 and got that A should be
16:02 A = [-dPsi_i(y) | -I_n]
16:03 -I_n is the identity matrix of size n; n = # training examples
16:03 that would be for the first iteration
16:03 when we have one constraint for each training example
16:04 b = - DeltaLoss(y_true, y_pred)
16:04 what about delta?
16:04 ohh sry.. :)
16:04 does it look ok?
16:04 in the first iteration you also have -delta for b, right?
16:05 isn't that what I said ^?
16:05 < n4nd0> b = - DeltaLoss(y_true, y_pred)
16:05 or do you mean another thing?
16:05 (sounds a bit confusing): that would be for the first iteration
16:06 when we have one constraint for each training example
16:06 but okay.. we mean the same thing.
16:06 why does that sound confusing
16:06 ?
16:06 and yes, that sounds right
16:06 i thought you divided it into 2 cases: one is the first iteration and second all other iterations..
16:07 anyway, lets move on
16:07 all right
16:08 so later
16:08 the new constraints that can be added each iteration
16:08 I remember you told me that we may add one constraint per iteration
16:08 one constraint per iteration per example
16:09 but as I understand it now, it would be one constraint per training example per iteration
16:09 all right
16:09 and do they look the same?
16:09 I mean with this
16:09 we add one constraint
16:09 then this implies a row in A that looks like
16:10 -dPsi_i(y_i_pred) |
16:10 the part to the right would be a vector of zeros everywhere
16:11 except for one position that is equal to 1
16:11 that position is i, the index of the training example
16:11 correct! :) (with -1 or?)
16:11 ooo gotme! -1 :D
16:12 that sounds absolutely correct to me, sire!
16:12 good
16:13 that means we should now focus on the example application.
16:13 and of course one new value for b
16:13 (of course) ;)
16:13 yeah! I think I have a clear idea how to do this part
16:13 I will change a couple of things though
16:13 since right now it is prepared to load the full A and C matrices to MOSEK each iteration
16:14 and this is rather stupid in this case ....
16:14 the mosek part is quite big, right? i thought it would be a simple quadprog(...)
16:14 haha yes.. kind of
16:15 how would you do the application?
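The constraint layout agreed on above can be sketched in a few lines (a hypothetical Python sketch, not the PrimalMosekSOSVM code; the variable ordering x = [w; xi_1..xi_n] is an assumption). Each cutting-plane constraint w·dPsi_i(y) + xi_i >= Delta(y_i, y), rewritten for the Ax <= b form, contributes the row [-dPsi_i(y) | 0 ... -1 ... 0] with the -1 in slack column i and right-hand side -Delta.

```python
def build_constraints(dim_w, num_examples, working_set):
    """Assemble A and b for A x <= b with x = [w; xi_1..xi_n].

    working_set: list of (i, dpsi, delta) tuples, where i is the training
    example index, dpsi is the length-dim_w feature difference vector
    dPsi_i(y), and delta is the loss DeltaLoss(y_i, y).
    """
    A, b = [], []
    for i, dpsi, delta in working_set:
        slack_part = [0.0] * num_examples
        slack_part[i] = -1.0              # the -xi_i term
        A.append([-v for v in dpsi] + slack_part)
        b.append(-delta)
    return A, b
```

In the first iteration, with one constraint per training example, this reproduces the block form A = [-dPsi_i(y) | -I_n] from the discussion.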
16:15 you have to input to mosek sparse matrices and stuff like that
16:15 takes a few lines ;)
16:15 ok
16:16 let's move to multiclass classification then
16:16 I read about it a couple of weeks ago and took notes about the points I didn't have clear
16:16 let's make it like this, we just have to focus on the application specific parts
16:17 the labels here are simple, I think I will use something like MulticlassLabels for this
16:17 argmax function?
16:18 argmax is super-simple: for (c=0;c
we know the mapping phi
16:23 lets assume phi is the id mapping: phi(x) = x
16:24 then psi(x,y) = [x;x;x]
16:24 or structured: psi = [x and y=0; x and y=1; x and y=2]
16:24 ok ... wait a moment here
16:25 I read that Psi(x,y) = phi(x) x Delta(y)
16:25 fuck, too many xs...
16:26 the x between phi and Delta is the tensor product
16:26 not exactly, it is psi(x,y) = phi * kronecker-delta, right
16:26 in the paper it is written something like orthogonal encoding of label y
16:27 and I got a bit O_O
16:27 big words for a small issue :)
16:27 agree :)
16:28 it is exactly like training 3 linear svms: f_i(x) = w_i^T phi(x) \forall i
16:28 and put all together into a giant vector: f = argmax_i f_i(x)
16:30 f(x) = argmax_i w_i^T phi(x)
16:31 ok but
16:32 mmm
16:32 you said psi(x,y) = phi(x) * kronecker-delta
16:32 yes i said this and its bullshit i see :)
16:33 I guess that the kronecker-delta (let's call it d for short) must depend on y
16:33 something like d(y)
16:33 yes
16:33 ok, tell me how it depends on it
16:34 for me a kronecker delta is a vector of zeros and one value equal to 1
16:34 in short: psi(x,y=i) = phi(x)
16:34 d(y) = 1 if y = 0, 0 otherwise
16:34 the kronecker is to select a certain phi(x) out of the psi-vector: psi(x,y) = [phi(x);phi(x);...]
16:35 ok, I got the idea
16:36 so I think I am just missing how to define phi
16:36 ohh.. okay that's the usual svm phi.. so we make it linear phi(x) = x
16:37 aham ok
16:37 structured loss?
16:39 correct = 0, incorrect = 1?
16:39 right :)
16:39 ok
16:39 then I have all the pieces
16:40 well it is a debug setting right... we don't want to win a competition ;)
16:40 true
16:40 ok
16:41 that's something we do afterwards
16:41 I will work on this within the next few days then
16:41 can you setup the application in python?
16:42 (generating the examples ...)
16:42 I like C++ but I will do it in python ;)
16:42 hurray
16:42 okay.. then lets talk on thursday again?
16:43 or wednesday is even better
16:43 so what about if I let you know by mail once I am done with the algorithm so you can visual-debug it?
16:43 let's make it Wednesday then
16:43 how did it go with the NIPS deadline by the way?
16:44 what are you presenting?
16:44 submitted the paper 10 minutes before the deadline :)
16:44 what about?
16:44 an application paper, some l_1 density level set estimation
16:45 no idea what that is :O
16:45 like one-class learning
16:45 that doesn't make sense in my head
16:46 one-class classification :P
16:46 imagine you have a lot of unlabeled data (for instance network traffic)
16:47 you know that most of the examples are normal data (users clicking on a website, ..)
16:47 but you also know that some of the examples are spurious (hackers)
16:47 a simple setting would be: learn a hypersphere that encloses most of the data
16:48 hence, learn a tight bound around a single class of examples excluding examples that deviate from normal behaviour
16:49 oh, looks interesting
16:49 machine learning applied to computer security right?
16:49 right ;)
16:50 (but for nips we had some brain-computer interface data)
16:50 what do you mean with brain-computer?
16:50 eeg-measurements of your brain activity
16:51 ok
16:51 http://www.bbci.de/
16:51 quite interesting... but i like structured output more ;)
16:52 I think the computer security example looks nicer :)
16:52 what applications of SO do you know beyond CV?
16:53 bioinformatics, machine translation
16:53 part-of-speech tagging
16:53 speech recognition
16:54 mmm...
and all upcoming interesting new applications ;)
16:54 have you seen anything applied in communications?
16:55 I've not found anything yet
16:55 but I know that in decoding, in wireless communications, some things are modelled with HMMs
16:56 so I guess there must be room to apply SO there
16:56 there are ICs that do message passing all the time
16:56 what is its relation to SO?
16:57 to adjust the parameters (transition and emission scores)
16:57 i guess there is room for new inventions :)
16:58 I am interested in re-using my knowledge of SO for my thesis
16:58 mmhh.. is it about communication?
16:59 I meant the knowledge I'll have after the project
16:59 not that much probably but something :P
16:59 my degree is in telecommunications
16:59 what is the title?
16:59 my Spanish one
17:00 here in Sweden I'm studying robotics
17:00 i also studied communication and robotics :)
17:00 (but forgot everything :( )
17:01 :O
17:01 ok
17:01 I should go now!
17:01 but we talk again on Wednesday, bye!
17:01 bye
17:01 -!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Quit: leaving]
17:01 -!- nicococo [~nico@lacedcoffee.ml.tu-berlin.de] has left #shogun []
17:01 -!- pluskid [~pluskid@202.130.113.138] has quit [Quit: Leaving]
17:13 -!- uricamic [~uricamic@cmpgw-27.felk.cvut.cz] has quit [Quit: Leaving.]
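The multiclass pieces assembled in the exchange above (phi(x) = x, psi(x, y) placing x into the block selected by y, a per-class argmax, and the 0/1 structured loss) fit together in a few lines; this is a toy sketch of the debug setting they describe, not Shogun code.

```python
def psi(x, y, num_classes):
    """Joint feature map: stack x into the y-th of num_classes blocks, zeros elsewhere."""
    out = [0.0] * (num_classes * len(x))
    out[y * len(x):(y + 1) * len(x)] = x
    return out


def argmax(w, x, num_classes):
    """f(x) = argmax_c w^T psi(x, c), i.e. argmax_c w_c^T x."""
    scores = [sum(wi * pi for wi, pi in zip(w, psi(x, c, num_classes)))
              for c in range(num_classes)]
    return max(range(num_classes), key=lambda c: scores[c])


def delta_loss(y_true, y_pred):
    """Structured 0/1 loss: correct = 0, incorrect = 1."""
    return 0.0 if y_true == y_pred else 1.0
```

With w viewed as three stacked per-class weight vectors, the argmax is exactly the "training 3 linear svms" picture from the discussion.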
18:29 -!- emrecelikten [~emrecelik@176.40.245.70] has joined #shogun
18:35 -!- hoijui [~hoijui@141.23.65.251] has joined #shogun
18:44 -!- romi_ [~mizobe@187.66.121.115] has quit [Ping timeout: 250 seconds]
18:48 -!- romi_ [~mizobe@187.66.121.115] has joined #shogun
18:50 -!- heiko [~heiko@dhcp-174-242.internal.eduroam.ucl.ac.uk] has quit [Ping timeout: 256 seconds]
18:57 -!- heiko [~heiko@dhcp-174-242.internal.eduroam.ucl.ac.uk] has joined #shogun
19:02 -!- wiking [~wiking@78-23-189-112.access.telenet.be] has joined #shogun
19:19 -!- wiking [~wiking@78-23-189-112.access.telenet.be] has quit [Changing host]
19:19 -!- wiking [~wiking@huwico/staff/wiking] has joined #shogun
19:19 -!- gsomix [~gsomix@85.112.35.132] has joined #shogun
19:23 hello
19:23 I passed the theoretical mechanics exam \(^_^)/
19:23 -!- flxb [~cronor@fb.ml.tu-berlin.de] has quit [Ping timeout: 260 seconds]
19:27 -!- blackburn [~blackburn@31.28.59.65] has joined #shogun
19:35 blackburn, yo
19:36 hey
19:36 wiking: gsomix: weekly reports please ;)
19:36 gsomix, aha
19:37 -!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]
19:37 huh :)
19:37 just in time
19:37 -!- heiko [~heiko@dhcp-174-242.internal.eduroam.ucl.ac.uk] has left #shogun []
20:09 -!- cronor [~cronor@g231231221.adsl.alicedsl.de] has joined #shogun
20:20 -!- hoijui [~hoijui@141.23.65.251] has quit [Quit: Leaving]
20:23 -!- gsomix [~gsomix@85.112.35.132] has quit [Quit: Ex-Chat]
20:58 build #250 of nightly_none is complete: Failure [failed compile]  Build details are at http://www.shogun-toolbox.org/buildbot/builders/nightly_none/builds/250
21:01 -!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun
21:16 build #259 of nightly_all is complete: Failure [failed compile]  Build details are at http://www.shogun-toolbox.org/buildbot/builders/nightly_all/builds/259
21:17 -!- heiko [~heiko@host86-181-154-139.range86-181.btcentralplus.com] has joined #shogun
21:21 shogun: Soeren Sonnenburg master * rfc9099c /
src/shogun/labels/Labels.h : fix documentation error - http://git.io/ly8B7Q
21:28 shogun: Soeren Sonnenburg master * ref62fca / (src/shogun/lib/SGVector.h src/shogun/lib/SGVector.cpp): split up SGVector into .cpp / .h file and only enable it for numerical types - http://git.io/_ynlYA
21:28 shogun: Soeren Sonnenburg master * reac5249 / (4 files in 2 dirs): drop GMM - it abuses SGVector to store Gaussians - http://git.io/U-4vug
21:28 shogun: Soeren Sonnenburg master * rba6bb44 / src/shogun/so/StructuredModel.h : fix error in formula - http://git.io/hGDiQw
21:28 drop??
21:29 build #170 of nightly_default is complete: Failure [failed compile]  Build details are at http://www.shogun-toolbox.org/buildbot/builders/nightly_default/builds/170
21:32 sonney2k: weren't you on holidays? :D
21:35 hmm drop gmm
21:37 blackburn: what's it?
21:38 or was it ...
21:39 gaussian mixture models
21:39 shogun: Heiko Strathmann master * rf6951d0 / (2 files): -finishing touches to the spectrum based null-distribution sampling - http://git.io/xuQsfQ
21:56 shogun: Heiko Strathmann master * r8836c0a / (2 files): Merge pull request #566 from karlnapf/master - http://git.io/a1-Bdg
21:56 build #938 of cmdline_static is complete: Failure [failed test_1]  Build details are at http://www.shogun-toolbox.org/buildbot/builders/cmdline_static/builds/938  blamelist: sonne@debian.org
22:06 blackburn: btw, let me ask you something
22:09 general SVM theory :)
22:09 ask
22:09 the optimization problem for SVMs is convex
22:10 right
22:10 even when we are talking about soft margin
22:10 with slack variables and so on
22:10 then there exists a global minimum of the function we seek to optimize
22:11 build #917 of r_static is complete: Failure [failed test_1]  Build details are at http://www.shogun-toolbox.org/buildbot/builders/r_static/builds/917  blamelist: sonne@debian.org
22:11 yes, as long as the problem is convex
22:11 ok
22:11 there should be a theorem I think
22:11 so all the different SVM solvers should give the same
output right?
22:11 hmmmm
22:12 well due to numerical stuff not really the same
22:12 is it their efficiency that differs?
22:12 but yes they should mostly coincide
22:12 but does that turn out to be a big difference?
22:12 no I don't think it is big
22:12 curious
22:12 I find that almost magical
22:12 why?
22:13 that there are a bunch of different algorithms out there giving the same solution!
22:13 but the formulation is the same
22:13 out there = inside here in shogun :D
22:13 I find it funny, don't you?
22:14 well.. just different ways to obtain the solution
22:14 just like solving nonlinear equations
22:14 n4nd0: about convexity
22:15 imagine a polygon in R^2
22:15 build #904 of python_static is complete: Failure [failed test_1]  Build details are at http://www.shogun-toolbox.org/buildbot/builders/python_static/builds/904  blamelist: sonne@debian.org
22:15 and you want to find the point nearest to the origin
22:15 can you get stuck somewhere if it is convex?
22:16 with polygon are you thinking of a closed curve?
22:17 yes
22:17 something like a U closed in the upper part
22:17 linear piecewise
22:17 ?
22:18 then I guess you could get stuck in the corners
22:18 any area with linear piecewise boundary
22:18 can there be a corner which is nearer to the origin than its neighbors?
22:19 mmm I think that depends on where the curve is
22:20 you can put it in such a way that one of the corners is the origin
22:20 _/\_
22:20 like that?
22:20 that is not convex
22:20 yes
22:20 is it?
22:20 that's what I mean
22:20 not convex - so if convex you can always travel to a better corner
22:21 and you will never be at a local minimum
22:21 ok
22:21 nice property of convexity
22:21 btw let me ask you now
22:22 is SO svm convex?
22:22 n4nd0: btw did you check the logs where we were discussing VC capacity with heiko? :)
22:22 blackburn: no I didn't, was it today?
22:23 hard to remember..
probably or yesterday
22:23 blackburn: it must be convex, it is reduced to a QP
22:23 n4nd0: I became really interested in the relation between Popper's falsifiability and VC dimension
22:23 I don't know if that is a good answer though...
22:23 I even asked my former philosophy teacher about that :D
22:24 however I had to explain to him what VC is
22:24 build #827 of octave_static is complete: Failure [failed test_1]  Build details are at http://www.shogun-toolbox.org/buildbot/builders/octave_static/builds/827  blamelist: sonne@debian.org
22:24 n4nd0 check out the boyd and vandenberghe book
22:24 that's amazing, I recently bought it and it's a pleasure to read
22:24 ah I awakened heiko
22:25 covers all the convexity/SVM/Lagrange stuff
22:25 yes :)
22:25 I am listening all the time :)
22:25 heiko: yeah, I started with it :)
22:25 why did you buy it?
22:25 I am following it together with the lectures on youtube
22:25 :)
22:25 I also got some nice notes which are a good summary of the stuff you need for SVM
22:25 blackburn, well I like books :)
22:25 I prefer to read using my kindle dx - so awesome
22:26 n4nd0, try lecture 7 of these: http://www.gatsby.ucl.ac.uk/~gretton/coursefiles/rkhscourse.html
22:26 I like paper :)
22:26 also, you can defend yourself with heavy books, in contrast to a kindle :)
22:26 oh food is ready
22:26 see you later!
22:26 heiko: bye, and thanks!
22:27 see you
22:27 I am an old guy in that sense too, I prefer paper
22:27 I have not tried a kindle though
22:27 I've heard it is comfortable to read with too
22:27 yes
22:28 the dx is the first thing I bought with gsoc money :D
22:30 last year I mean
22:31 n4nd0: I'll start with the boyd book after my thesis I think
22:34 I know *a little* and need to systemize
22:34 blackburn: cool
22:37 blackburn: we can talk about our respective progress
22:37 in cvxopt?
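The convexity point above ("if convex you can always travel to a better corner") is exactly why the different SVM solvers coincide: the soft-margin primal objective f(w) = ||w||^2 / 2 + C · sum_i max(0, 1 - y_i · w^T x_i) is convex, so any local minimum is global. A small numerical spot-check of the midpoint inequality f((u+v)/2) <= (f(u)+f(v))/2, on made-up data (an illustration only, not a proof or any solver's code):

```python
import random


def primal(w, X, y, C=1.0):
    """Soft-margin SVM primal objective: 0.5 ||w||^2 + C * sum of hinge losses."""
    reg = 0.5 * sum(wi * wi for wi in w)
    hinge = sum(max(0.0, 1.0 - yi * sum(wi * xi for wi, xi in zip(w, x)))
                for x, yi in zip(X, y))
    return reg + C * hinge


def midpoint_convex(X, y, trials=100, dim=2, seed=0):
    """Spot-check Jensen's midpoint inequality on random pairs of weight vectors."""
    rng = random.Random(seed)
    for _ in range(trials):
        u = [rng.uniform(-3, 3) for _ in range(dim)]
        v = [rng.uniform(-3, 3) for _ in range(dim)]
        mid = [(a + b) / 2 for a, b in zip(u, v)]
        if primal(mid, X, y) > (primal(u, X, y) + primal(v, X, y)) / 2 + 1e-12:
            return False
    return True
```

Since both the quadratic regularizer and the hinge loss are convex, no random pair can ever violate the inequality.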
22:37 yeah
22:38 yeah
22:38 n4nd0, blackburn, re
22:41 I just had the exam about it, had to derive one-class nu-svm with soft margin :)
22:42 I also wanna get more into the stuff, very interesting, so let's talk about it at some point
22:42 sure
22:43 I talked about VC dim for an hour today with a philosopher :D
22:45 -!- zxtx [~zv@cpe-75-83-151-252.socal.res.rr.com] has quit [Read error: Connection reset by peer]
22:50 -!- zxtx [~zv@cpe-75-83-151-252.socal.res.rr.com] has joined #shogun
22:59 -!- romi_ [~mizobe@187.66.121.115] has quit [Ping timeout: 245 seconds]
23:18 -!- romi_ [~mizobe@187.66.121.115] has joined #shogun
23:18 -!- emrecelikten [~emrecelik@176.40.245.70] has quit [Quit: Leaving.]
23:24 shogun: Heiko Strathmann master * r4400786 / src/shogun/statistics/QuadraticTimeMMD.h : added comment - http://git.io/ui0sdQ
23:24 shogun: Heiko Strathmann master * r2fbbd7a / (2 files): added inverse_incomplete_gamma for computing the CDF of the - http://git.io/aU7hyQ
23:24 shogun: Heiko Strathmann master * r816d12d / examples/undocumented/libshogun/statistics.cpp : added test to ensure high-quality results for new alglib routines - http://git.io/4OSF1Q
23:24 shogun: Heiko Strathmann master * r0d707d8 / (4 files in 3 dirs): Merge pull request #567 from karlnapf/master - http://git.io/kXv1bw
23:24 heiko: so now you are completely free?
23:25 blackburn, not yet :)
23:25 one more
23:25 wednesday
23:25 ah
23:25 another harder one
23:25 so tomorrow all day studying
23:25 what is it about?
23:25 general machine learning algos
23:26 math, FA, GMM, HMM, ICA, LDS, optimisation, sampling, dimensionality reduction
23:26 oh wtf weren't the ones you had about ML?
23:26 :D
23:26 mathematical basics of them
23:26 what? :)
23:26 heiko: you take really cool courses
23:27 you had exams about ML ML ML and ML
23:27 :D
23:27 n4nd0, thanks :)
23:27 yes
23:27 I mean that's what I am studying
23:27 and why I came here
23:27 what is LDS??
23:27 :D
23:27 in Germany I would have to take all this CS crap :)
23:27 linear dynamical systems
23:27 no idea
23:27 :)
23:27 kalman filter, auto-regressive models etc
23:27 argh?
23:27 why is it ml
23:28 pretty important and useful stuff
23:28 yes I know kalman and ARMA models
23:28 I mean this is all related to Bayesian methods
23:28 wait
23:28 kalman applied to bayesian inference?
23:28 the kalman filter has a bayesian interpretation
23:28 oh gosh
23:29 btw the guy who did the lecture is david barber
23:29 he has a really nice book
23:29 which dim reduction algos do you study?
23:29 http://web4.cs.ucl.ac.uk/staff/D.Barber/pmwiki/pmwiki.php?n=Brml.HomePage
23:29 great book about ML and bayesian, there's nearly everything in it
23:29 we did mostly PCA
23:30 but in quite some detail
23:30 also how to use it for matrix completion etc
23:30 I really think Isomap is a must to know
23:30 I'll put it on my list :)
23:31 currently a bit short on time
23:31 blackburn, n4nd0, this course here was really great and I recommend it without any doubts :)
23:32 too much studying for me
23:33 :)
23:33 yeah, it's kind of a lot of work
23:35 but London is a lovely place to be :)
23:35 I will go offline now, guys, have a good evening :)
23:35 see ya
23:36 --- Log closed Tue Jun 05 00:00:41 2012