--- Log opened Mon Jun 04 00:00:41 2012
-!- heiko [] has quit [Ping timeout: 260 seconds]00:08
CIA-9shogun: Heiko Strathmann master * rbe0a1f5 / (2 files): -added create_centering_matrix which returns a matrix that can be used -
CIA-9shogun: Heiko Strathmann master * r7830e2e / (2 files): Merge pull request #564 from karlnapf/master -
blackburnheiko: are you sure you need centering matrix?00:35
-!- heiko [] has joined #shogun00:35
blackburnheiko: why do you need centering matrix?00:36
heikoblackburn hi00:36
heikoIt makes things much easier00:36
blackburnhi :)00:36
heikoin conjunction with my new matrix_multiply method00:36
blackburnI just want to make you aware that it is inefficient00:36
heikommh, well I know00:37
heikobut I think in this case it is negligible00:38
heikosince the expensive parts happen somewhere else00:38
n4nd0good night guys00:38
blackburnwhat do you need to center?00:38
blackburnand how?00:38
blackburnn4nd0: good night00:38
-!- n4nd0 [] has quit [Quit: leaving]00:38
heikokernel matrices00:39
blackburnok if C is centering matrix and K is kernel matrix00:39
blackburnyou need C K C?00:39
heikofor K and L kernel matrices00:40
blackburnok then I do not really have idea how to do that better00:41
blackburnIIRC C K C is CMath::center_matrix00:41
heikoyes I think its fine00:41
heikoIll check00:41
blackburnCKC is actually00:41
blackburnsubtract column mean00:41
blackburnsubtract row mean00:41
blackburnand add grand mean00:41
blackburnmuch faster00:42
heikooh yes seeing it00:42
heikothats actually better00:42
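The O(n^2) shortcut blackburn describes (subtract column means, subtract row means, add the grand mean) gives the same result as forming an explicit centering matrix H and computing the O(n^3) product H K H. A standalone numpy sketch for illustration, not shogun's actual CMath::center_matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
K = X @ X.T  # a small linear kernel matrix

n = K.shape[0]
# Explicit centering matrix: H = I - (1/n) * ones(n, n)
H = np.eye(n) - np.full((n, n), 1.0 / n)
K_centered_slow = H @ K @ H  # two extra O(n^3) matrix products

# Equivalent O(n^2) trick: subtract row/column means, add grand mean
K_centered_fast = (K
                   - K.mean(axis=0, keepdims=True)   # column means
                   - K.mean(axis=1, keepdims=True)   # row means
                   + K.mean())                       # grand mean

assert np.allclose(K_centered_slow, K_centered_fast)
```

The equivalence follows from expanding H K H = K - (1/n) J K - (1/n) K J + (1/n^2) J K J, where J is the all-ones matrix.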
-!- zxtx [] has quit [Remote host closed the connection]00:48
-!- heiko [] has quit [Ping timeout: 260 seconds]00:55
-!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]01:02
-!- heiko [] has joined #shogun01:14
CIA-9shogun: Heiko Strathmann master * r95228e9 / (2 files in 2 dirs): -added SGVector method for sum -
CIA-9shogun: Heiko Strathmann master * r2b27934 / (2 files in 2 dirs): Merge pull request #565 from karlnapf/master -
blackburnheiko: I managed to cite Karl Popper :D01:30
heikowho is that? :)01:30
blackburnwow I thought you would know him :)01:30
blackburngerman philosopher01:31
blackburnheiko: VC capacity is related to Popper's falsifiability01:31
heikowhat did he do? :)01:31
heikoreally, hehe citing philosophy in ML dissertations? :)01:31
blackburnjust for fun01:32
blackburnheiko: ok by Popper science knowledge should be falsifiable01:32
blackburni.e. there should be a case when this knowledge is false01:32
heikoI once wanted to cite a guy called Scott E. Fahlman01:33
heikowho invented the :-) sign01:33
blackburnVC capacity of linear discriminant is d+1 you know01:33
heikooh this VC-stuff, I find it annoying01:33
blackburnd+1 is the number of points *to falsify* linear classifier01:33
blackburnto falsify in the sense of Popper01:33
heikoall the bounds are non-tight and so philosophical :)01:34
heikoah nice01:34
heikothats a nice connection01:34
blackburnthere was a paper by vapnik and corfield describing relation between VC and falsifiability01:34
blackburnthat's why I wanted to mention this01:34
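The "d+1 points to falsify a linear classifier" claim can be checked concretely in the 1-D case (d = 1): sign(w*x + b) can only realize labelings with at most one sign change along the sorted points, so any 2 points are shattered but 3 points with an alternating labeling falsify it. A small illustrative sketch using that fact (helper name is mine, not from any library):

```python
from itertools import product

def separable_1d(xs, ys):
    """In 1-D, sign(w*x + b) realizes a labeling iff the labels are
    monotone along the sorted points (at most one sign change)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    lab = [ys[i] for i in order]
    changes = sum(lab[i] != lab[i + 1] for i in range(len(lab) - 1))
    return changes <= 1

# d + 1 = 2 points: every one of the 2^2 labelings is realizable
assert all(separable_1d([0.0, 1.0], list(lab))
           for lab in product([-1, 1], repeat=2))

# d + 2 = 3 points: the alternating labeling falsifies the classifier
assert not separable_1d([0.0, 1.0, 2.0], [1, -1, 1])
```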
heikoprobably good, how is it going with your thing?01:35
blackburn90% ready I think01:35
blackburnthat's how it looks for now01:35
blackburnheiko: 40 is my favourite number so I think 40 references would stay :D01:37
heikohey, I can understand which topic some pages are about :)01:37
blackburnthat's easy01:37
blackburnI think01:37
heikoah sobel filters, I wonder what the russian word for that is :)01:37
blackburnno that's prewitt01:38
blackburnsobel is 1 3 101:38
heikooh yes01:38
heikosorry :)01:38
heikowhy are you writing in russian?01:38
blackburn1 2 101:38
blackburnheiko: I have no choice01:38
blackburnyeah I don't think it is possible01:39
heikobtw I just saw that liblinear has now svr01:39
heikothats really cool01:39
heikoand we already have it in shogun right?01:39
blackburnyes - thanks to sonney2k@island01:39
blackburnI work on some regression too btw01:40
-!- heiko1 [] has joined #shogun01:43
heiko1blackburn, argh connection troubles again :(01:43
heiko1I gotta go to bed anyway ...01:43
heiko1good night!01:43
blackburnI just said I do some regression too01:43
blackburnand currently I'm working on thing that learns linear regression models with task tree regularization01:43
heiko1ah ok so thats nice for you too01:43
blackburnshould be cool01:43
heiko1nice :)01:43
blackburnnot really01:44
-!- heiko [] has quit [Ping timeout: 260 seconds]01:44
blackburnI implement other method01:44
blackburnheh okay so see you tomorrow01:44
heiko1take care!01:44
-!- heiko1 [] has left #shogun []01:44
-!- zxtx [] has joined #shogun01:46
shogun-buildbotbuild #600 of csharp_modular is complete: Failure [failed compile]  Build details are at  blamelist: heiko.strathmann@gmail.com01:59
-!- blackburn [~blackburn@] has quit [Read error: Operation timed out]02:27
-!- av3ngr [~av3ngr@] has joined #shogun02:33
-!- av3ngr [~av3ngr@] has quit [Remote host closed the connection]02:33
-!- puffin444 [62e3926e@gateway/web/freenode/ip.] has quit [Quit: Page closed]02:41
-!- zxtx [] has quit [Remote host closed the connection]02:57
-!- zxtx [] has joined #shogun03:22
-!- wiking_ [~wiking@] has joined #shogun04:48
-!- wiking_ [~wiking@] has quit [Changing host]04:48
-!- wiking_ [~wiking@huwico/staff/wiking] has joined #shogun04:48
-!- wiking_ is now known as wiking04:51
-!- wiking_ [] has joined #shogun04:54
-!- wiking_ [] has quit [Changing host]04:54
-!- wiking_ [~wiking@huwico/staff/wiking] has joined #shogun04:54
-!- wiking [~wiking@huwico/staff/wiking] has quit [Ping timeout: 256 seconds]04:55
-!- wiking_ is now known as wiking04:55
-!- av3ngr [~av3ngr@] has joined #shogun05:25
-!- av3ngr [~av3ngr@] has quit [Read error: Connection reset by peer]05:25
-!- av3ngr [av3ngr@nat/redhat/x-xhwxdalqowhhovrg] has joined #shogun06:46
-!- av3ngr [av3ngr@nat/redhat/x-xhwxdalqowhhovrg] has quit [Client Quit]06:50
-!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]07:04
-!- wiking [] has joined #shogun07:29
-!- wiking [] has quit [Changing host]07:29
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun07:29
-!- n4nd0 [] has joined #shogun07:33
-!- n4nd0 [] has quit [Ping timeout: 260 seconds]09:17
-!- n4nd0 [] has joined #shogun10:50
-!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]11:26
-!- blackburn [~blackburn@] has joined #shogun11:44
-!- pluskid [] has joined #shogun12:06
-!- pluskid [] has quit [Client Quit]12:11
-!- flxb [] has joined #shogun12:29
-!- foo__ [2e1fd566@gateway/web/freenode/ip.] has joined #shogun13:05
foo__I have some questions about "CommWordStringKernel"13:06
foo__shall I use the SortWordString preprocessor ?13:07
foo__Is it possible to fix the k parameter for k-mer frequency in the spectrum kernel ?13:07
-!- uricamic [] has joined #shogun13:13
-!- foo__ [2e1fd566@gateway/web/freenode/ip.] has quit [Quit: Page closed]13:58
-!- pluskid [~pluskid@] has joined #shogun14:11
-!- wiking [] has joined #shogun14:18
-!- wiking [] has quit [Changing host]14:18
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun14:18
-!- wiking [~wiking@huwico/staff/wiking] has quit [Client Quit]14:18
-!- romi_ [~mizobe@] has joined #shogun14:30
-!- blackburn [~blackburn@] has quit [Ping timeout: 260 seconds]14:42
-!- pluskid [~pluskid@] has quit [Ping timeout: 244 seconds]15:01
-!- pluskid [~pluskid@] has joined #shogun15:01
-!- heiko [] has joined #shogun15:20
-!- nicococo [] has joined #shogun15:29
n4nd0hey nicococo15:29
nicococohola dudes15:29
n4nd0how is it going?15:30
nicococowell, its a bit chaotic (nips deadline, some people visiting, seminar preparation,...) but okay15:30
nicococohow were your exams and how is the vanilla sosvm ..15:31
n4nd0the exam went well, I passed the course15:32
n4nd0regarding the vanilla sosvm15:32
n4nd0I have made some progress with the opt. algorithm15:33
n4nd0I think I should have it finished soon15:33
n4nd0have you seen the code?15:33
nicococonot yet.. how can i download your changes again?15:34
n4nd0just check them in github if so:
n4nd0so what I have left is the part of the constraints Ax <= b15:37
n4nd0to build A and b to give it to the QP solver15:37
nicococois the max_slack selection correct?15:39
n4nd0lines 103-104?15:39
nicococo(well i guess it is) the lines 99-105 yes..15:40
nicocococur_list = (CList*) results->get_element(i);15:40
nicococoeach example has an own list, right?15:40
n4nd0I discovered that yesterday ....15:40
nicococookay :)15:40
n4nd0the max_slack looks right to me15:41
n4nd0let me know what bugs you15:41
nicococoyes..  i think you can skip lines 107-116 and just do the same thing in 120-12615:42
nicococo(just init max_slack with -inf)15:42
n4nd0120-126 are there to handle the case when the lists are empty15:43
n4nd0for the first iteration of the outer do ... while15:44
nicococowell, for me it would be nice to have this functionality at one place but its up to you ;)15:45
n4nd0do you mean the piece of code to insert?15:45
n4nd0ok ... I will take a look to it15:46
nicococoits not really important..15:46
nicococoCPrimalMosekSOSVM::compute_loss_arg(CResultSet* result)15:46
nicococoshouldn't it be part of the application ?15:46
nicococo(i remember the coffin discussion)15:47
n4nd0I just defined this method as a shortcut15:48
n4nd0since we are doing quite a few of times that operation15:48
n4nd0taking into account coffing15:48
n4nd0coffin sorry :D15:48
nicococoa propos: i need coffee..15:49
nicococojust wait 30sec ;)15:49
nicococoback again.. coffeinized15:52
nicococonow i can type twice as fast..yeah15:53
n4nd0so I think that using coffin strategy here15:53
nicococopredicted_delta_loss(int32_t idx)  ??15:53
n4nd0I think that will be useful to build A15:53
n4nd0let's get into that15:55
n4nd0so as I understand15:55
nicococowell okay..15:55
nicococothese coding issues are rather small right15:55
n4nd0each of the CResultSets that are stored15:55
nicococolets get it to work15:55
n4nd0yeah ... is almost a matter of style15:55
n4nd0so each of the CResultSets are associated with a constraint right?15:56
n4nd0and this structure is always growing in the algorithm15:56
n4nd0no element is removed from there15:56
nicococono element is removed (BY NOW)15:56
n4nd0so the constraints that are introduced in the QP in the first iteration, will be used as well in the last one15:57
nicococothere is a nice and simple heuristic15:57
nicococoi know this sounds shitty15:57
nicococothere will be thousands of constraints in A in the end15:57
n4nd0I see15:58
nicococoand the solver will become slower and slower15:58
n4nd0but the argmax will still be the bottleneck15:58
nicococobut as i mentioned there is a nice heuristic to remove inactive constraints before solving the qp15:58
nicococoin most applications the argmax is the bottleneck15:58
nicococo(being ~10 times slower than the optimizing part)15:59
n4nd0I see15:59
n4nd0I have checked a videolecture about how to handle cases when the argmax gets intractable15:59
nicococoof course that depends on the application...15:59
n4nd0I guess that speeds things up15:59
n4nd0to use approximations for the argmax and the like16:00
nicococothere is also a paper that states: why approximations don't wokr :)16:00
nicococosry, work16:00
n4nd0it is not a good idea to use them then?16:01
nicococofor some applications we have LINEAR time algos that solve the argmax16:01
nicococo(HMM, CRF)16:01
nicococooops hmm for sosvm and crf16:01
nicococoand yes, i find approximations very interesting..16:02
nicococobut thats another topic16:02
n4nd0let's focus again16:02
n4nd0so I did some paper work16:02
n4nd0and got that A should be16:02
n4nd0A = [-dPsi_i(y) | -I_n]16:03
n4nd0-I_n is the identity matrix of size n; n = # training examples16:03
n4nd0that would be for the first iteration16:03
n4nd0when we have one constraint for each training example16:04
n4nd0b = - DeltaLoss(y_true, y_pred)16:04
nicococowhat about delta?16:04
nicococoohh sry.. :)16:04
n4nd0does it look ok?16:04
nicococoin the first iteration you also have -delta for b, right?16:05
n4nd0isn't what I said ^?16:05
n4nd0< n4nd0> b = - DeltaLoss(y_true, y_pred)16:05
n4nd0or do you mean another thing?16:05
nicococo(sounds a bit confusing): that would be for the first iteration16:06
nicococowhen we have one constraint for each training example16:06
nicococobut okay.. we mean the same thing.16:06
n4nd0why does that sound confusing16:06
nicococoand yes, that sound right16:06
nicococoi thought you divide into 2 cases: one is the first iteration and second all other iterations..16:07
nicococoanyway, lets move on16:07
n4nd0all right16:08
n4nd0so later16:08
n4nd0the new constraints that can be added for each iteration16:08
n4nd0I remember you told me that we may add one constraint per iteration16:08
nicococoone constraint per iteration per example16:09
n4nd0but as I understand it now, it would be one constraint per each training example per iteration16:09
n4nd0all right16:09
n4nd0and do they look the same?16:09
n4nd0I mean with this16:09
n4nd0we add one constraint16:09
n4nd0then this implies a row in A that looks like16:10
n4nd0-dPsi_i(y_i_pred) |16:10
n4nd0the part to the right would be a vector of zeros everywhere16:11
n4nd0except from one position that is equal to 116:11
n4nd0that position is i, the index of the training example16:11
nicocococorrect! :)  (with -1 or?)16:11
n4nd0ooo gotme! -1 :D16:12
nicococothat sound absolutely correct to me, sire!16:12
nicococothat means, we should now focus on the example application.16:13
n4nd0and of course one new value for b16:13
nicococo(of course) ;)16:13
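The constraint bookkeeping n4nd0 and nicococo settle on above (first iteration: A = [-dPsi | -I_n], b = -DeltaLoss; later iterations: one appended row per example with -1 in that example's slack column) can be sketched in numpy. Function names and the dense representation are illustrative only; shogun actually feeds MOSEK sparse matrices:

```python
import numpy as np

def initial_constraints(dPsi, delta_loss):
    """First iteration: one row A_i = [-dPsi_i | -e_i] per example,
    with b_i = -DeltaLoss(y_i_true, y_i_pred)."""
    n, d = dPsi.shape                       # n examples, d = dim(w)
    A = np.hstack([-dPsi, -np.eye(n)])      # shape (n, d + n)
    b = -np.asarray(delta_loss, dtype=float)
    return A, b

def add_constraint(A, b, dPsi_i, delta_loss_i, i, n):
    """Later iterations: append one row whose slack part is all zeros
    except for -1 at position i (the index of the training example)."""
    slack = np.zeros(n)
    slack[i] = -1.0
    row = np.concatenate([-dPsi_i, slack])
    return np.vstack([A, row]), np.append(b, -delta_loss_i)
```

With this layout, Ax <= b for x = [w; xi] encodes w^T dPsi_i + xi_i >= DeltaLoss_i, which is the margin constraint with slack.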
n4nd0yeah! I think I have a clear idea how to do this part16:13
n4nd0I will change a couple of things though16:13
n4nd0since right now it is prepared to load to MOSEK the full A and C matrices each iteration16:14
n4nd0and this is rather stupid in this case ....16:14
nicococothe mosek part is quite big, right?  i thought it would be a simple quadprog(...)16:14
n4nd0haha yes.. kind of16:15
nicococohow would you do the application?16:15
n4nd0you have to input to mosek sparse matrices and stuff like that16:15
n4nd0take a few lines ;)16:15
n4nd0let's move to multiclass classification then16:16
n4nd0I read about it a couple of weeks ago and took notes about the points I didn't have clear16:16
n4nd0let's make it like this, we have just to focus on the application specific parts16:17
n4nd0the labels here are simple, I think I will use something like MulticlassLabels for this16:17
n4nd0argmax function?16:18
nicococoargmax is super-simple: for (c=0;c<CLASSES;c++) ... take max_c16:20
nicococomore details??16:20
nicococookay: example with 3 classes16:21
nicococow = [w_1;w_2;w_3]16:21
nicococopsi(x,y) = [phi(x); phi(x); phi(x)]16:22
n4nd0I am not clear on how to find out the size of each w_i16:22
n4nd0but I think this is because I don't really understand how psi is defined16:23
nicococowell its all linear -> we know the mapping phi16:23
nicococolets assume phi is id mapping : phi(x) = x16:24
nicococothen psi(x,y) = [x;x;x]16:24
nicococoor structured: psi = [x and y=0; x and y=1; x and y=2]16:24
n4nd0ok ... wait a moment here16:25
n4nd0I read that Psi(x,y) = phi(x) x Delta(y)16:25
n4nd0fuck , too many xs...16:26
n4nd0the x between phi and Delta is the tensor product16:26
nicococonot exactly it is psi(x,y) = phi * kronecker-delta, right16:26
n4nd0in the paper it is written something like orthogonal encoding of label y16:27
n4nd0and I got a bit O_O16:27
nicococobig words for small issue :)16:27
n4nd0agree :)16:28
nicococoit is exactly like training 3 linear svms:   f_i(x) = w_i ^T phi(x)   /forall i16:28
nicococoand put all together into a giant vector: f = argmax_i f_i(x)16:30
nicococof(x) = argmax_i w_i^T phi(x)16:31
n4nd0ok but16:32
n4nd0you said psi(x,y) = phi(x) * kronecker-delta16:32
nicococoyes i said this and its bullshit i see :)16:33
n4nd0I guess that the kronecker-delta (let's call it d for short) must depend on y16:33
n4nd0something like d(y)16:33
n4nd0ok, tell me how it depends on it16:34
n4nd0for me a kronecker delta is a vector of zeros and one value equal to 116:34
nicococoin short:  psi(x,y=i) = phi(x)16:34
n4nd0d(y) = 1 if y = 0, 0 otherwise16:34
nicococothe kronecker is to select a certain phi(x) out of the psi-vector: psi(x,y) = [phi(x);phi(x);...]16:35
n4nd0ok, I got the idea16:36
n4nd0so I think I am just missing how to define phi16:36
nicococoohh.. okay thats the usual svm phi.. so we make it linear phi(x) = x16:37
n4nd0aham ok16:37
n4nd0structured loss?16:39
n4nd0correct = 0, incorrect = 1?16:39
nicococoright :)16:39
n4nd0then I have all the pieces16:40
nicococowell it is a debug setting right... we don't want to win a competition ;)16:40
nicococothats something we do afterwards16:41
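Putting the pieces together for the multiclass debug application: psi(x,y) places phi(x) = x in the y-th block (the Kronecker product with the one-hot encoding of y), the argmax is a loop over classes, and the structured loss is 0/1. A minimal illustrative sketch; all names here are hypothetical, not shogun's API:

```python
import numpy as np

def psi(x, y, n_classes):
    """Joint feature map: phi(x) = x placed in the y-th block,
    zeros elsewhere (one-hot(y) kron x)."""
    out = np.zeros(n_classes * x.size)
    out[y * x.size:(y + 1) * x.size] = x
    return out

def argmax_class(w, x, n_classes):
    """f(x) = argmax_y w^T psi(x, y): since w = [w_1; ...; w_C],
    this is just the best of C linear scores w_y^T x."""
    scores = [w @ psi(x, y, n_classes) for y in range(n_classes)]
    return int(np.argmax(scores))

def delta(y_true, y_pred):
    """0/1 structured loss: correct = 0, incorrect = 1."""
    return 0.0 if y_true == y_pred else 1.0
```

This makes the "exactly like training C linear SVMs" remark concrete: the giant vector w is the stacked per-class weight vectors, and psi selects which block the example's features land in.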
n4nd0I will work on this within the next few days then16:41
nicocococan you setup the application in python?16:42
nicococo(generating the examples ...)16:42
n4nd0I like C++ but I will do it in python ;)16:42
nicococookay.. then lets talk on thursday again?16:43
nicococoor wednesday is even better16:43
n4nd0so what about if I let you know by mail once I am done with the algorithm so you can visual-debug it?16:43
n4nd0let's make it on Wednesday then16:43
n4nd0how did it go with the NIPS deadline by the way?16:44
n4nd0what are you presenting?16:44
nicococosubmitted the paper 10minutes before deadline :)16:44
n4nd0what about?16:44
nicococoapplication paper some l_1 density level set estimation16:45
n4nd0no idea what that is :O16:45
nicococolike one-class learning16:45
n4nd0that doesn't make sense in my head16:46
n4nd0one-class classification :P16:46
nicococoimagine you have a lot of unlabeled data (for instance network traffic)16:47
nicococoyou know that most of the examples are normal data (userr clicking on website,..)16:47
nicococobut you also know that some of the examples are spurious (hackers)16:47
nicococoa simple setting would be: learn a hypersphere that encloses most of the data16:48
nicococohence, learn a tight bound around a single class of examples excluding examples that deviate from normal behaviour16:49
n4nd0oh, looks interesting16:49
n4nd0machine learning applied to computer security right?16:49
nicococoright ;)16:50
nicococo(but for nips we had some brain-computer interface data)16:50
n4nd0what do you mean with brain-computer?16:50
nicococoeeg-measurements of your brain activity16:51
nicococoquite interesting... but i like structured output more ;)16:52
n4nd0I think the computer security example looks nicer :)16:52
n4nd0what applications of SO do you know beyond CV?16:53
nicococobioinformatics, machine translation16:53
nicococopart-of-speech tagging16:53
nicococospeech recognition16:54
nicococommm... and all upcoming interesting new applications ;)16:54
n4nd0have you seen something applied in communications?16:55
n4nd0I've not found anything yet16:55
n4nd0but I know that in decoding, in wireless communications some things are modelled with HMMs16:56
n4nd0so I guess there must be room to apply SO there16:56
nicococothere are ICs that do message passing all the time16:56
n4nd0what is its relation to SO?16:57
nicococoto adjust the parameters (transmission and emission scores )16:57
nicococoi guess there is room for new inventions :)16:58
n4nd0I am interested in re-using my knowledge of SO for my thesis16:58
nicococommhh.. its about communication?16:59
n4nd0I meant like the knowledge I'll have after the project16:59
n4nd0not that much probably but something :P16:59
n4nd0my degree is in telecommunications16:59
nicococowhat is the title?16:59
n4nd0my Spanish one17:00
n4nd0here in Sweden I'm studying robotics17:00
nicococoi also studied communication and robotics :)17:00
nicococo(but forgot everything :( )17:01
n4nd0I should go now!17:01
n4nd0but we talk again on Wednesday, bye!17:01
-!- n4nd0 [] has quit [Quit: leaving]17:01
-!- nicococo [] has left #shogun []17:01
-!- pluskid [~pluskid@] has quit [Quit: Leaving]17:13
-!- uricamic [] has quit [Quit: Leaving.]18:29
-!- emrecelikten [~emrecelik@] has joined #shogun18:35
-!- hoijui [~hoijui@] has joined #shogun18:44
-!- romi_ [~mizobe@] has quit [Ping timeout: 250 seconds]18:48
-!- romi_ [~mizobe@] has joined #shogun18:50
-!- heiko [] has quit [Ping timeout: 256 seconds]18:57
-!- heiko [] has joined #shogun19:02
-!- wiking [] has joined #shogun19:19
-!- wiking [] has quit [Changing host]19:19
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun19:19
-!- gsomix [~gsomix@] has joined #shogun19:23
gsomixI passed theoretical mechanics exam \(^_^)/19:23
-!- flxb [] has quit [Ping timeout: 260 seconds]19:27
-!- blackburn [~blackburn@] has joined #shogun19:35
gsomixblackburn, yo19:36
blackburnwiking: gsomix: weekly reports please ;)19:36
gsomixgsomix, aha19:37
-!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]19:37
gsomixhuh :)19:37
blackburnjust in time19:37
-!- heiko [] has left #shogun []20:09
-!- cronor [] has joined #shogun20:20
-!- hoijui [~hoijui@] has quit [Quit: Leaving]20:23
-!- gsomix [~gsomix@] has quit [Quit: Ex-Chat]20:58
shogun-buildbotbuild #250 of nightly_none is complete: Failure [failed compile]  Build details are at
-!- n4nd0 [] has joined #shogun21:16
shogun-buildbotbuild #259 of nightly_all is complete: Failure [failed compile]  Build details are at
-!- heiko [] has joined #shogun21:21
CIA-9shogun: Soeren Sonnenburg master * rfc9099c / src/shogun/labels/Labels.h : fix documentation error -
CIA-9shogun: Soeren Sonnenburg master * ref62fca / (src/shogun/lib/SGVector.h src/shogun/lib/SGVector.cpp): split up SGVector into .cpp / .h file and only enable it for numerical types -
CIA-9shogun: Soeren Sonnenburg master * reac5249 / (4 files in 2 dirs): drop GMM - it abuses SGVector to store Gaussians -
CIA-9shogun: Soeren Sonnenburg master * rba6bb44 / src/shogun/so/StructuredModel.h : fix error in formula -
shogun-buildbotbuild #170 of nightly_default is complete: Failure [failed compile]  Build details are at
n4nd0sonney2k: weren't you on holidays? :D21:35
blackburnhmm drop gmm21:37
n4nd0blackburn: what's it?21:38
n4nd0or was it ...21:39
blackburngaussian mixture models21:39
CIA-9shogun: Heiko Strathmann master * rf6951d0 / (2 files): -finishing touches to the spectrum based null-distribution sampling -
CIA-9shogun: Heiko Strathmann master * r8836c0a / (2 files): Merge pull request #566 from karlnapf/master -
shogun-buildbotbuild #938 of cmdline_static is complete: Failure [failed test_1]  Build details are at  blamelist: sonne@debian.org22:06
n4nd0blackburn: btw, let me ask you something22:09
n4nd0general SVM theory :)22:09
n4nd0the optimization problem for SVMs is convex22:10
n4nd0even when we are talking about soft margin22:10
n4nd0with slack variables and so on22:10
n4nd0then there exists a global minimum of the function we seek to optimize22:11
shogun-buildbotbuild #917 of r_static is complete: Failure [failed test_1]  Build details are at  blamelist: sonne@debian.org22:11
blackburnyes, as long as the problem is convex22:11
blackburnthere should be a theorem I think22:11
n4nd0so all the different SVM solvers should give the same output right?22:11
blackburnwell due to numerical stuff not really the same22:12
n4nd0it is their efficiency what differs?22:12
blackburnbut yes they should mostly coincide22:12
n4nd0but does that turn to be a big difference?22:12
blackburnno I don't think it is big22:12
n4nd0I find that almost magical22:12
n4nd0that there are a bunch of different algorithms out there giving the same solution!22:13
blackburnbut formulation is the same22:13
n4nd0out there = inside here in shogun :D22:13
n4nd0I find it funny, don't you?22:14
blackburnwell.. just different way to obtain solution22:14
blackburnjust like solving nonlinear equations22:14
blackburnn4nd0: about convexity22:15
blackburnimagine polygon in R^222:15
shogun-buildbotbuild #904 of python_static is complete: Failure [failed test_1]  Build details are at  blamelist: sonne@debian.org22:15
blackburnand you want to find a point nearest to origin22:15
blackburncan you stuck somewhere if it is convex?22:16
n4nd0with polygon are you thinking of a closed curve?22:17
n4nd0something like an U closed in the upper part22:17
blackburnlinear piecewise22:17
n4nd0then I guess you could get stuck in the corners22:18
blackburnany area with linear piecewise boundary22:18
blackburncan there be a case with corner which are nearer to origin than its neighbors?22:19
blackburnwhich is*22:19
n4nd0mmm I think that depends on where the curve is22:20
n4nd0you can put it in such a way that one of the corners is the origin22:20
blackburnlike that?22:20
n4nd0that is not convex22:20
n4nd0is it?22:20
blackburnthat's what I mean22:20
blackburnnot convex - so if it is convex you can always travel to a better corner22:21
blackburnand you will never be stuck at a local minimum22:21
n4nd0nice property of convexity22:21
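blackburn's corner-hopping intuition can be checked numerically for one concrete convex polygon (a regular hexagon shifted away from the origin): every non-optimal vertex has a strictly closer neighbor, so greedy hopping between adjacent corners reaches the nearest vertex. This demonstrates the property for this particular polygon only; it is not a proof:

```python
import numpy as np

# Vertices of a convex polygon: a regular hexagon centered at (2, 1)
theta = np.linspace(0, 2 * np.pi, 6, endpoint=False)
verts = np.stack([np.cos(theta) + 2.0, np.sin(theta) + 1.0], axis=1)

dist = np.linalg.norm(verts, axis=1)  # distance of each vertex to origin
n = len(verts)
best = dist.argmin()

# Every vertex other than the global best has a strictly closer
# neighbor, so local corner-hopping cannot get stuck here.
for i in range(n):
    if i != best:
        assert min(dist[(i - 1) % n], dist[(i + 1) % n]) < dist[i]
```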
blackburnbtw let me ask you now22:22
blackburnis SO svm convex?22:22
blackburnn4nd0: btw did you check logs where we were discussing VC capacity with heiko? :)22:22
n4nd0blackburn: no I didn't, was it today?22:23
blackburnhard to remember.. probably or yesterday22:23
n4nd0blackburn: it must be convex, it is reduced to a QP22:23
blackburnn4nd0: I became so interested with relations between Popper's falsifiability and VC dimension22:23
n4nd0I don't know if that is a good answer though...22:23
blackburnI even asked my former philosophy teacher about that :D22:24
blackburnhowever I had to describe what VC is to him22:24
shogun-buildbotbuild #827 of octave_static is complete: Failure [failed test_1]  Build details are at  blamelist: sonne@debian.org22:24
heikon4nd0 check out the boyd and vandenberghe book22:24
heikothats amazing, I recently bought it and its a pleasure to read22:24
blackburnah I awakened heiko22:25
heikocovers all the convexity/SVM/Lagrange stuff22:25
heikoyes :)22:25
heikoI am listeing all the time :)22:25
n4nd0heiko: yeah, I started with it :)22:25
blackburnwhy did you buy it?22:25
n4nd0I am following it together with the lectures in youtube22:25
heikoI also got some nice notes which are a good summary of the stuff you need for SVM22:25
heikoblackburn, well I like books :)22:25
blackburnI prefer to read using my kindle dx - so awesome22:26
heikon4nd0,  try lecture 7 of these:
heikoI like paper :)22:26
heikoalso, you can defend yourself with heavy books in contrast to kindle :)22:26
heikooh food is ready22:26
heikosee you later!22:26
n4nd0heiko: bye, and thanks!22:27
blackburnsee you22:27
n4nd0I am an old guy in that sense too, I prefer the paper22:27
n4nd0I have not tried kindle though22:27
n4nd0I've heard it is comfortable to read with it too22:27
blackburndx is the first thing I bought with gsoc money :D22:30
blackburnlast year I mean22:31
blackburnn4nd0: I'll start with boyd book after my thesis I think22:34
blackburnI know *a little* and need to systemize22:34
n4nd0blackburn: cool22:37
n4nd0blackburn: we can talk about our respective progress22:37
blackburnin cvxopt?22:37
heikon4nd0, blackburn, re22:41
heikoI just had the exam about it, had to derive one-class nu-svm with soft margin :)22:42
heikoI also wanna get more into the stuff, very interesting, so lets talk about it at some point22:42
blackburnI talked about VC dim for an hour today with philosopher :D22:45
-!- zxtx [] has quit [Read error: Connection reset by peer]22:50
-!- zxtx [] has joined #shogun22:59
-!- romi_ [~mizobe@] has quit [Ping timeout: 245 seconds]23:18
-!- romi_ [~mizobe@] has joined #shogun23:18
-!- emrecelikten [~emrecelik@] has quit [Quit: Leaving.]23:24
CIA-9shogun: Heiko Strathmann master * r4400786 / src/shogun/statistics/QuadraticTimeMMD.h : added comment -
CIA-9shogun: Heiko Strathmann master * r2fbbd7a / (2 files): added inverse_incomplete_gamma for computing the CDF of the -
CIA-9shogun: Heiko Strathmann master * r816d12d / examples/undocumented/libshogun/statistics.cpp : added test to ensure high-quality results for new alglib routines -
CIA-9shogun: Heiko Strathmann master * r0d707d8 / (4 files in 3 dirs): Merge pull request #567 from karlnapf/master -
blackburnheiko: so now you are completely free?23:25
heikoblackburn, not yet :)23:25
heikoone more23:25
heikoanother harder one23:25
heikoso tomorrow all day studying23:25
blackburnwhat is it about?23:25
heikogeneral machine learning algos23:26
heikomath, FA, GMM, HMM, ICA, LDS, optimisation, sampling, dimensionality reduction23:26
blackburnoh wtf weren't ones you had about ML?23:26
heikomathematical basics of them23:26
heikowhat? :)23:26
n4nd0heiko: you take really cool courses23:27
blackburnyou had exams about ML ML ML and ML23:27
heikon4nd0, thanks :)23:27
heikoI mean thats what I am studying23:27
heikoand why I came here23:27
blackburnwhat is LDS??23:27
heikoin Germany I would have to take all this CS crap :)23:27
heikolinear dynamical systems23:27
blackburnno idea23:27
heikokalman filter, auto-regressive models etc23:27
blackburnwhy is it ml23:28
heikopretty important and useful stuff23:28
blackburnyes I know kalman and ARMA models23:28
heikoI mean this is all related to Bayesian methods23:28
blackburnkalman applied to bayesian inference?23:28
heikokalman filter has a bayesian interpretation23:28
blackburnoh gosh23:29
heikobtw the guy who did the lecture is david barber23:29
heikohe has a really nice book23:29
blackburnwhich dim reduction algos do you study?23:29
heikogreat book about ML and bayesian, theres nearly everything in it23:29
heikowe did mostly PCA23:30
heikobut in quite detail23:30
heikoalso how to use it for matrix completion etc23:30
blackburnI really think Isomap is a must to know23:30
heikoIll put it on my list :)23:31
heikocurrently a bit short on time23:31
heikoblackburn, n4nd0, this course here was really great and I recommend it without any doubts :)23:32
blackburntoo much studies for me23:33
heikoyeah, its kind of a lot of work23:35
heikobut London is a lovely place to be :)23:35
heikoI will go offline now, guys, have a good evening :)23:35
blackburnsee ya23:36
--- Log closed Tue Jun 05 00:00:41 2012