--- Log opened Fri Apr 20 00:00:19 2012
-!- emrecelikten [~emre@] has joined #shogun00:08
-!- wiking [~wiking@huwico/staff/wiking] has quit [Ping timeout: 248 seconds]01:04
-!- emrecelikten [~emre@] has quit [Quit: Leaving.]01:16
-!- emrecelikten [~emre@] has joined #shogun01:17
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun01:30
-!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]02:11
-!- av3ngr [av3ngr@nat/redhat/x-svsxtaryupzxrgvv] has joined #shogun02:13
-!- PhilTillet [~Philippe@] has quit [Quit: Leaving]02:42
-!- emrecelikten [~emre@] has quit [Quit: Leaving.]03:13
-!- xiangwang [~chatzilla@] has joined #shogun05:15
-!- xiangwang [~chatzilla@] has quit [Ping timeout: 244 seconds]05:21
-!- n4nd0 [] has joined #shogun07:05
-!- n4nd0 [] has quit [Quit: leaving]07:10
<CIA-64> shogun: Evgeniy Andreev master * rb919bef / examples/undocumented/python_modular/ : fixed regression gaussian process example for python3 support -
07:59 <CIA-64> shogun: Soeren Sonnenburg master * rd466587 / examples/undocumented/python_modular/ :
07:59 <CIA-64> shogun: Merge pull request #473 from gsomix/python3_interface
<CIA-64> shogun: Fixed regression gaussian process example for python3 support -
08:05 <CIA-64> shogun: Soeren Sonnenburg master * r8b0a9d7 / (74 files in 11 dirs):
08:05 <CIA-64> shogun: Merge pull request #472 from gsomix/big_deal_with_sgvector
<CIA-64> shogun: Start with SGVector -> SGVector& migration (+12 more commits...) -
<CIA-64> shogun: Soeren Sonnenburg master * r39c82e6 / (src/shogun/regression/LARS.cpp src/shogun/regression/LARS.h): fix compile error: make LARS unavailable w/o lapack -
-!- wiking [] has joined #shogun08:55
-!- wiking [] has quit [Changing host]08:55
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun08:55
-!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]09:02
-!- xiangwang [~chatzilla@] has joined #shogun09:05
-!- uricamic [9320543b@gateway/web/freenode/ip.] has joined #shogun09:06
-!- xiangwang [~chatzilla@] has quit [Client Quit]09:06
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun09:15
-!- xiangwang [~chatzilla@] has joined #shogun09:28
-!- Jiuding [d2198538@gateway/web/freenode/ip.] has joined #shogun09:30
-!- Jiuding [d2198538@gateway/web/freenode/ip.] has quit [Client Quit]09:31
09:36 <xiangwang> hi wiking, pull request made successfully. Thanks :)
-!- harshit_ [~harshit@] has joined #shogun09:47
09:51 <wiking> xiangwang: no worries
09:51 <wiking> gsomix: yeeeey cool patches!
09:51 <wiking> gsomix: around
-!- harshit_ [~harshit@] has quit [Ping timeout: 260 seconds]09:56
09:56 <xiangwang> ok, just keep on working++
-!- n4nd0 [] has joined #shogun10:01
-!- blackburn [5bdfb203@gateway/web/freenode/ip.] has joined #shogun10:06
10:07 <blackburn> ok I am finally finished with that shit
10:08 <blackburn> who is around? wiking? gsomix? sonne|work?
10:08 <wiking> finished with what shit?
10:09 <blackburn> wiking: passed optics ungraded exam
10:09 <wiking> hahahah congrats
10:09 <blackburn> that was the most awful exam ever
10:10 <blackburn> I guess
10:10 <wiking> btw: don't we want to expose the primal objective value in svm?
10:10 <wiking> i would need it...
10:10 <blackburn> comatic aberrations, chromatic aberrations
10:10 <blackburn> Seidel aberrations
10:11 <blackburn> Denisyuk holography
10:11 <n4nd0> blackburn: I am around here too, I am glad you finished your exam ;)
10:11 <blackburn> n4nd0: uh sorry I missed your nickname in the list
10:11 <blackburn> wiking: I thought it is available
10:11 <blackburn> wiking: do you mean multiclass?
10:11 <wiking> blackburn: no... i'm talking about svmocas
10:12 <blackburn> wiking: binary?
10:12 <blackburn> svm base class has a primal objective method IIRC
10:12 <blackburn> so it doesn't depend on implementation
10:13 <wiking> mmm where? :)
10:13 <wiking> man i was looking at the cmachine and clinearmachine api
10:13 <wiking> haven't seen one...
-!- av3ngr [av3ngr@nat/redhat/x-svsxtaryupzxrgvv] has quit [Quit: That's all folks!]10:14
10:14 <blackburn> wiking: SVM
10:14 <wiking> checking now CSVM
10:14 <blackburn> how can that be linearmachine? ;)
10:14 <blackburn> what is the objective
10:15 <blackburn> float64_t compute_svm_dual_objective();
10:15 <blackburn> this one
10:15 <blackburn> or primal, it is up to you
10:15 <wiking> ok then there's only one thing i think
10:15 <wiking> class CSVMOcas : public CLinearMachine
10:16 <wiking> this needs to be fixed afaik
10:16 <wiking> and this is why i was checking linear machine ;P
10:16 <wiking> since i'd like to use ocas for latent
10:17 <blackburn> I see
10:17 <wiking> so i guess before anything
10:17 <wiking> i should do the porting of svmocas to csvm
10:17 <blackburn> no way!
10:17 <wiking> or use libocas straight away
10:17 <blackburn> wiking: it shouldn't be csvm
10:18 <blackburn> csvm is a kernel machine
10:18 <wiking> ah so there's no linear svm api
10:18 <blackburn> yes exactly
10:18 <wiking> ok i see now
10:18 <blackburn> I am afraid we would need that
10:18 <wiking> so we would need CSVM and CKERNELSVM
10:18 <wiking> or something like this
10:18 <wiking> or CSVM and from that have a CKERNELSVM inherited
10:18 <blackburn> just insert CLinearSVM between CLinearMachine and CSVMOcas
10:20 <blackburn> no I don't think we need to rename basic svm
10:20 <wiking> anyways we need a standard api for linear svm
10:20 <wiking> where one can get access to the primal/dual objective value
10:20 <blackburn> things get tougher w/o multiple inheritance
10:20 <blackburn> wiking: actually even the computing routines are different
10:20 <blackburn> computing the primal of a linear svm costs nothing
10:20 <blackburn> would be really fast I mean
10:20 <wiking> yeah it's only that currently you do not have access to those values ;P
10:20 <blackburn> wiking: just add it, it is easy ;)
10:20 <wiking> i mean i could have just hacked CSVMOcas now and exposed the values in it
10:21 <wiking> but i thought it's better to do something more general
10:21 <wiking> and since now i've realized that we don't have a standard api for linear svm
10:21 <blackburn> yes CLinearSVM sounds legal to me
10:21 <wiking> ok i'll make one
10:21 <blackburn> sonne|work: CLinearSVM
10:21 <wiking> heheh again a branch switch
10:22 <wiking> i looooove git stash :D
10:22 <blackburn> #optimization, #shogun, #svms
10:22 <blackburn> that should take his attention
10:23 <wiking> blackburn: we've agreed yesterday
10:23 <wiking> to have a shogun/optimizer folder
10:23 <wiking> with all the qp solvers being there
10:23 <blackburn> I shall read the logs
10:23 <wiking> only thing is that sonne|work isn't sure yet what we should do about tron and pr_loqo
10:28 <n4nd0> wiking: what is tron exactly? I have not found that much doc within the code
10:28 <n4nd0> and a google search gets other hits for tron as you can imagine :)
10:28 <blackburn> n4nd0: some optimizer
10:33 <wiking> blackburn: ah yeah forgot to mention. would u mind if i change the doxygen config so that the inherited methods are present in the documentation of a class as well?
10:33 <blackburn> I don't mind, but no idea what sonne|work would say
10:33 <wiking> ok well u guys will decide about the pull request :P
10:34 <blackburn> I am actually thinking
10:34 <blackburn> we should spend some week in may
10:34 <blackburn> on doc
10:34 <blackburn> kind of a doc sprint heh
-!- Marty28 [] has joined #shogun10:38
blackburnn4nd0: do you like it?10:42
blackburnwiking: and you?10:42
n4nd0doc improvement?10:42
n4nd0I might be kind of boring but it is probably necessary... so yes I like it10:43
blackburnno we can make it fun10:45
blackburnn4nd0: barack obama says we can10:45
n4nd0haha all right10:45
n4nd0but not how, looks fishy10:46
blackburnn4nd0: be a mathematician10:46
blackburnexistence is everything10:46
wikingblackburn: most definitely would be up for it10:47
-!- Marty28 [] has quit [Quit: Colloquy for iPad -]10:49
blackburnwhat about middle of may?10:49
n4nd0after the 7th it is good for me10:50
wikingmm i'm away between 13-20 and 24->3110:50
wikingotherwise it's good10:50
-!- gsomix [~gsomix@] has quit [Read error: Operation timed out]10:51
blackburnwiking: is it serious you will be in iceland or so?10:52
wikingblackburn: in july yes10:52
wikingblackburn: but there i'll be 24/7 available10:52
wikingin may i have conferences at those dates... that's why i cannot really work10:52
-!- xiangwang [~chatzilla@] has quit [Ping timeout: 265 seconds]10:52
blackburnI see10:52
shogun-buildbotbuild #500 of ruby_modular is complete: Failure [failed test_1]  Build details are at  blamelist: sonne@debian.org10:53
-!- gsomix [~gsomix@] has joined #shogun10:55
10:56 <sonne|work> wiking, blackburn please no CLinearSVM
10:56 <blackburn> sonne|work: why?
10:56 <wiking> sonne|work: ok what then?
10:57 <blackburn> sonne|work: where to place helpers like the objective stuff?
10:57 <sonne|work> blackburn: in LinearMachine directly
10:57 <blackburn> sonne|work: any motivation why we should avoid CLinearSVM?
10:57 <wiking> sonne|work: so i should then reimplement the ocas optimizer if i want to get access to the objective value?
10:57 <sonne|work> blackburn: what would you derive liblinear from?
10:58 <blackburn> sonne|work: ehm CLinearSVM?
10:58 <sonne|work> it is logistic regression, linear programming machine, support vector regression *and* linear svm
10:58 <sonne|work> all in one thing
10:59 <wiking> yep true that
10:59 <blackburn> ah you're right
10:59 <blackburn> sonne|work: do you like the doc sprint idea?
11:00 <sonne|work> I am certainly happy if you all write doc :D
11:00 <wiking> ehehhe so ok
11:00 <wiking> should we then just add a new function to linear machine
11:00 <blackburn> sonne|work: it would need your help too ;)
11:00 <wiking> that is 'optional'?
11:01 <blackburn> especially for legacy stuff
11:01 <sonne|work> wiking: yes - even implement it directly in there
11:01 <sonne|work> blackburn: why that?
11:01 <sonne|work> then it is not good :P
11:01 <blackburn> sonne|work: I am not able to doc some POIMs (wtf is it actually?), alignment kernels blabla
11:01  * sonne|work hates writing docs - and yes lots if not most of the doc was written by me
11:02 <wiking> sonne|work: directly there? i mean it depends on the solver whether i have access to it at all...
11:02 <sonne|work> wiking: I thought you wanted to compute the objective?
11:02 <sonne|work> svm objective is pretty clear if you have training data, w and b
11:02 <sonne|work> and C
11:02 <wiking> yes but only svm has it
11:03 <blackburn> wiking: hm but why not compute the dual objective of LDA's w?
11:03 <blackburn> would be interesting haha
11:03 <sonne|work> ok then - float64_t compute_primal_objective() { SG_NOTIMPLEMENTED; }
11:03 <wiking> sonne|work: yep that's what i thought
11:03 <wiking> that we have a pure virtual function
11:03 <wiking> in linear machine
11:03 <wiking> and if we can have such
11:04 <wiking> we supply one
11:04 <wiking> if not then it's notimplemented
11:04 <sonne|work> not pure virtual but virtual yes
11:04 <wiking> hehe ok
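For reference, the quantity sonne|work says is "pretty clear if you have training data, w and b, and C" is the standard soft-margin SVM primal. A minimal numpy sketch of that computation (not Shogun's actual LinearMachine code; the function and variable names are illustrative):

```python
import numpy as np

def compute_primal_objective(w, b, C, X, y):
    """Soft-margin SVM primal: 0.5*||w||^2 + C * sum_i max(0, 1 - y_i*(w.x_i + b))."""
    margins = y * (X @ w + b)
    hinge = np.maximum(0.0, 1.0 - margins)
    return 0.5 * float(w @ w) + C * float(hinge.sum())

# A toy set that w = (1, 0), b = 0 separates with margin >= 1,
# so the hinge term vanishes and only the regularizer remains:
X = np.array([[2.0, 0.0], [-2.0, 0.0]])
y = np.array([1.0, -1.0])
obj = compute_primal_objective(np.array([1.0, 0.0]), 0.0, 1.0, X, y)
print(obj)  # 0.5
```

For a linear machine this is cheap (one pass over the data), which is blackburn's point above about the linear primal "costing nothing".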
11:07 <blackburn> okay let's schedule some doc madness in the second half of may
11:09 <wiking> ok can i ask a naive question
11:10 <wiking> like in the case of CSVMOcas
11:10 <wiking> why are the svmocas related variables protected?
11:11 <wiking> are we planning ahead that maybe one day somebody will inherit from it and want access to those variables?
11:11 <wiking> but i mean this is true for a lot of machines..
11:12 <blackburn> wiking: but they are protected so what is the problem?
11:12 <wiking> ok other way to formulate my question: why aren't they private
11:13 <blackburn> sorry I misread your question as a statement :)
11:13 <blackburn> yes it makes sense to provide such extensibility I think
-!- PhilTillet [] has joined #shogun11:14
PhilTilletHello everybody11:14
PhilTilletI'm finally in spring break for 1 week yeee11:15
wikingPhilTillet: good on you11:15
wikingenjoy it ;)11:15
PhilTilletI won't get bored :p a lot of work11:16
PhilTilleti'm kind of disappointed by what I do at school so I end up having to do a lot of work on my own :p11:18
-!- xiangwang [~chatzilla@] has joined #shogun11:20
-!- xiangwang [~chatzilla@] has quit [Ping timeout: 276 seconds]11:26
shogun-buildbotbuild #520 of csharp_modular is complete: Failure [failed test_1]  Build details are at  blamelist:, gsomix@gmail.com11:27
blackburnnice patch11:28
shogun-buildbotbuild #533 of octave_modular is complete: Failure [failed test_1]  Build details are at  blamelist:, gsomix@gmail.com11:34
-!- xiangwang [~chatzilla@] has joined #shogun11:35
-!- PhilTillet [] has quit [Ping timeout: 248 seconds]11:40
shogun-buildbotbuild #509 of lua_modular is complete: Failure [failed test_1]  Build details are at  blamelist:, gsomix@gmail.com11:40
shogun-buildbotbuild #510 of java_modular is complete: Failure [failed test_1]  Build details are at  blamelist:, gsomix@gmail.com11:51
-!- PhilTillet [~Philippe@] has joined #shogun11:52
shogun-buildbotbuild #506 of python_modular is complete: Failure [failed test_1]  Build details are at  blamelist:, gsomix@gmail.com11:58
12:02 <sonne|work> gsomix: you need to adjust all typemaps to work with SGVector& too
12:06 <gsomix> sonne|work, ok. I need a little time. I have just returned.
12:11 <sonne|work> gsomix: and of course changing typemaps while there are still some SGVectors without & around will still cause breakage - so maybe better to just copy typemaps for now to have a SGVector& and SGVector variant and then slowly proceed migrating them
12:17 <gsomix> sonne|work, got it.
12:22 <sonne|work> gsomix: all the other things I have in mind are a piece of cake compared to this transition
12:22 <sonne|work> it is the most intrusive so it should be done first and will trigger lots of bugs
-!- sonne|work [~sonnenbu@] has left #shogun []13:17
-!- xiangwang_ [~chatzilla@] has joined #shogun13:21
-!- xiangwang_ [~chatzilla@] has quit [Quit: ChatZilla [Firefox 9.0.1/20111220165912]]13:21
-!- xiangwang [~chatzilla@] has quit [Ping timeout: 272 seconds]13:36
-!- sonne|work [~sonnenbu@] has joined #shogun13:49
blackburn*pleeease turn on conditioners*13:50
PhilTilletblackburn, what about turning on the preconditioners?13:50
blackburnPhilTillet: I don't mind if any preconditioner is able to cool the room heh13:51
PhilTilletpreconditioners are cool13:52
PhilTilletor maybe you can put a preconditioner on your conditioner13:54
PhilTilletso that the temperature converges faster13:54
blackburnI hope it would converge to some minus this time13:55
PhilTilletBetter if the problem is convex too13:56
blackburnI'll become convex if temperature in this room stays the same or it becomes hotter13:57
PhilTilletn4nd0,  yes we're a bit crazy13:59
n4nd0what the hell man, today it shows in Sweden and you in Russia are warm13:59
n4nd0no way13:59
n4nd0not shows13:59
blackburnI'm dying because of the hell here14:00
n4nd0it has snowed today here14:00
blackburnit should be +3014:00
PhilTilletis sweden that far from Russia?14:01
blackburnrussia is far from russia14:01
blackburnhmm +2414:01
blackburnbut feels like hell14:01
blackburnthey broke conditioners in the office14:02
-!- xiangwang [~chatzilla@] has joined #shogun14:03
n4nd0see you later guys14:04
gsomixsonney2k, sonne|work around?14:06
-!- n4nd0 [] has quit [Ping timeout: 248 seconds]14:08
-!- xiangwang [~chatzilla@] has quit [Ping timeout: 245 seconds]14:09
-!- xiangwang [~chatzilla@] has joined #shogun14:23
-!- xiangwang [~chatzilla@] has quit [Ping timeout: 260 seconds]14:29
-!- n4nd0 [] has joined #shogun14:30
gsomixblackburn, wut? today is cold, desu14:31
blackburngsomix: cold? you should visit netcracker office with turned off conditioners14:31
-!- xiangwang [~chatzilla@] has joined #shogun14:32
blackburnhowever it is way too warm outside as well14:35
gsomixblackburn, hmm. I'm in my room and I'm cold, hehe14:36
-!- xiangwang [~chatzilla@] has quit [Ping timeout: 245 seconds]14:37
blackburnare you cold?14:37
PhilTilletblackburn, I'm kind of cold too14:38
-!- xiangwang [~chatzilla@] has joined #shogun14:39
blackburnas for me I want to sleeeep :)14:40
PhilTilletme too, but i'm reading some svm stuffs :p14:41
PhilTilletdoes Shogun use SMO for training SVMs?14:41
blackburnPhilTillet: at some point14:42
blackburnlibsvm is some kind of14:42
PhilTilletyes I read it was based on that14:42
PhilTilletbut not exactly the same14:42
blackburnin the shogun we just incorporate different solver14:44
blackburnwith some improvement along the way to merge14:44
PhilTilletcause i've read several papers on parallelizing svm, and most of the time they just use SMO for learning14:47
-!- pluskid [~pluskid@] has joined #shogun14:52
-!- xiangwang [~chatzilla@] has quit [Ping timeout: 265 seconds]14:52
-!- n4nd0 [] has quit [Quit: leaving]15:00
-!- harshit_ [~harshit@] has joined #shogun15:04
-!- blackburn [5bdfb203@gateway/web/freenode/ip.] has quit [Quit: Page closed]15:04
gsomixtime to compile15:05
* gsomix afk15:05
-!- Marina_ [55b38cc8@gateway/web/freenode/ip.] has joined #shogun15:16
-!- xiangwang [~chatzilla@] has joined #shogun15:17
15:18 <Marina_> @sonney2k: if i try to train a svm on the real splice data, I get the error: km = kernel.get_kernel_matrix(); SystemError: Out of memory error, tried to allocate 20000000000000000 bytes using malloc.
15:19 <Marina_> isn't it possible to train with so much data?
15:22 <PhilTillet> Marina_, isn't that 2e16 bytes? :p
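PhilTillet's arithmetic checks out: a dense float64 kernel matrix is n x n doubles, i.e. 8*n^2 bytes, so the failed allocation corresponds to n = 5*10^7 examples. A quick sanity check (numbers taken from the error message above):

```python
n = 50_000_000                       # examples implied by the failed malloc
bytes_needed = 8 * n * n             # dense float64 kernel matrix is n x n
print(bytes_needed)                  # 20000000000000000, matching the error

# the suggested starting subset of 1000 examples is trivial by comparison:
small = 8 * 1000 * 1000
print(small)                         # 8000000 bytes, i.e. 8 MB
```

This is why the advice below is to start with a subset of around 1000 examples and grow from there.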
-!- harshit_ [~harshit@] has quit [Remote host closed the connection]15:22
15:23 <Marina_> the dna_train.dat is 9.814.454 KB
-!- xiangwang [~chatzilla@] has quit [Ping timeout: 260 seconds]15:23
15:23 <Marina_> or what do you mean?
15:30 <sonne|work> Marina_: use a small subset of the data
15:36 <Marina_> @sonney2k how much data should I use for the subset? and should I first load it all into the program, or is it possible to load only a subset from the file?
15:41 <sonne|work> Marina_: start with very few (say 1000) and then gradually take more
15:42 <sonne|work> 100k is probably enough to show anything of interest
15:43 <Marina_> @sonne2k so i load them all with lm.load_dna('real_data/dna_train.dat') and then take the first 1000
15:45 <sonne|work> load just a few
15:45 <Marina_> but how do I do so?
15:46 <sonne|work> x=file('my_file.dna').readlines()[:100] ?
15:47 <sonne|work> do it in whatever language you program in
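sonne|work's one-liner uses the Python 2 `file()` builtin and still reads the whole file before slicing. A Python 3 sketch that stops after the first 100 lines (the filename is illustrative; the toy file stands in for the real dna_train.dat):

```python
from itertools import islice

# toy stand-in for the real data file
with open('my_file.dna', 'w') as f:
    f.write('ACGT\n' * 1000)

# read at most the first 100 lines without loading the rest
with open('my_file.dna') as f:
    lines = list(islice(f, 100))

print(len(lines))  # 100
```

`islice` matters here: `readlines()[:100]` would still pull a ~10 GB file entirely into memory before discarding most of it.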
-!- harshit_ [~harshit@] has joined #shogun15:47
harshit_hello everyone :)15:48
harshit_sonney2k, here ?15:48
-!- n4nd0 [] has joined #shogun15:50
15:54 <n4nd0> sonne|work: hey! I got some time to be in shogun mode :)
15:55 <n4nd0> sonne|work: do you want me to first try if MulticlassKernel is ok now after Heiko's work?
15:55 <n4nd0> sonne|work: or should I continue with the cover tree?
15:55 <sonne|work> n4nd0: I am most interested in the timing experiment
15:55 <sonne|work> harshit_: yes
15:56 <harshit_> sonne|work, got to know from the logs that svr is released..! Are you working on it?
15:56 <harshit_> I am asking coz I want to do that
15:57 <gsomix> sonne|work, ok, it compiled. python_modular, ruby_modular, octave_modular are fine now.
15:57 <n4nd0> sonne|work: using a CBlas function then right? I don't have it clear whether we concluded if CBlas or the decomposition (x-y)^2 = x^2 + y^2 - 2*x*y is preferable
15:57 <gsomix> sonne|work, i need to go. cu a little later
15:58 <sonne|work> n4nd0: cblas has no norm(x,y) function
15:58 <sonne|work> n4nd0: so try the decomposition using blas
16:00 <n4nd0> sonne|work: let me look in the logs, I remember blackburn suggested using a cblas function
16:00 <sonne|work> n4nd0: dnrm2
16:00 <n4nd0> sonne|work: yeah exactly
16:01 <sonne|work> n4nd0: that doesn't help, it will only do ||x||
16:01 <sonne|work> not ||x-y||
16:01 <sonne|work> harshit_: I started already - but what you are trying to do (SVMLin) is also nice!
16:02 <n4nd0> sonne|work: I could compute the difference vector first
16:02 <sonne|work> which is already slower than doing x^T y in the decomposition :)
16:02 <sonne|work> and blas
16:05 <PhilTillet> Ha, now I understand why GaussianKernel is a DotKernel and not a distance kernel :p
16:05 <sonne|work> PhilTillet: yeah nice trick eh?
16:06 <PhilTillet> yes :D
16:06 <PhilTillet> on GPU, norm2(x-y) is faster :p this was why I was facing a little design issue for my patch
16:07 <sonne|work> PhilTillet: why that?
16:07 <sonne|work> I mean if you already have ||x||^2 and ||y||^2 precomputed
16:07 <sonne|work> x^T y cannot possibly be slower, or?
16:07 <PhilTillet> yes, if you have these two precomputed
16:07 <PhilTillet> indeed we have
16:08 <sonne|work> yeah of course you need to precompute these
16:08 <sonne|work> otherwise -> slow
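The trick being discussed: once the squared norms are precomputed, every pairwise squared distance reduces to a dot product via ||x-y||^2 = ||x||^2 + ||y||^2 - 2*x^T*y, so the whole kernel matrix becomes one matrix multiply. A numpy sketch (not Shogun's CGaussianKernel code; the width convention here is K = exp(-||x-y||^2 / width)):

```python
import numpy as np

def gaussian_kernel_matrix(X, Y, width):
    """K[i, j] = exp(-||x_i - y_j||^2 / width), via the dot-product decomposition."""
    x_sq = (X * X).sum(axis=1)                 # precompute ||x_i||^2 once
    y_sq = (Y * Y).sum(axis=1)                 # precompute ||y_j||^2 once
    sq_dists = x_sq[:, None] + y_sq[None, :] - 2.0 * (X @ Y.T)
    return np.exp(-sq_dists / width)

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(5, 3)), rng.normal(size=(4, 3))
K = gaussian_kernel_matrix(X, Y, width=2.0)

# matches the naive per-pair computation of ||x - y||^2:
K_naive = np.array([[np.exp(-np.sum((x - y) ** 2) / 2.0) for y in Y] for x in X])
assert np.allclose(K, K_naive)
```

This is exactly why a Gaussian kernel can be built on top of a dot kernel, which is PhilTillet's realization below.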
16:08 <PhilTillet> the point is that i was still with that matrix stuff, I did some benchmarks replacing the dot product with dot on GPU
16:08 <PhilTillet> but well, it is pointless I think
16:09 <PhilTillet> gpu starts to benefit for blas1,2 for vector size > 100 000
16:09 <PhilTillet> and I guess we rarely have features of dimension more than 100 000 :p
16:12 <PhilTillet> well, and I was thinking, maybe the cleanest thing is to make some GPUKernels, like GPUDotKernels, etc...
16:15 <sonne|work> PhilTillet: maybe it is really only possible if this is done for a specific kernel, specific features + kernel machine
16:16 <sonne|work> so GaussianKernelMachine
16:16 <sonne|work> working only on 'simple' features
16:16 <PhilTillet> Could be possible with sparse features too
16:17 <PhilTillet> but I agree that for online learning, it's probably not efficient
16:17 <sonne|work> how could that work with sparse features?
16:17 <sonne|work> isn't that random memory access then again?
16:18 <PhilTillet> Hmm, it's possible to do sparse linear algebra on GPU :p
16:19 <PhilTillet> I guess what's important is to have all the features at the beginning so you can cache them efficiently
16:20 <PhilTillet> I've seen a paper with SMO on GPU where they use chunking because the kernel matrix is too large to fit in memory
16:25 <PhilTillet> they had a 50x speedup on learning compared to libsvm ><
-!- harshit_ [~harshit@] has quit [Ping timeout: 276 seconds]16:25
-!- uricamic [9320543b@gateway/web/freenode/ip.] has quit [Quit: Page closed]16:34
16:38 <sonne|work> PhilTillet: yeah sure that makes sense
16:39 <sonne|work> PhilTillet: actually chunking methods like svmlight precompute a part of the kernel matrix in each iteration (about 40 kernel rows)
16:39 <PhilTillet> hmm, interesting
-!- Marina_ [55b38cc8@gateway/web/freenode/ip.] has quit [Ping timeout: 245 seconds]16:42
-!- blackburn [5bde8018@gateway/web/freenode/ip.] has joined #shogun16:47
-!- blackburn [5bde8018@gateway/web/freenode/ip.] has quit [Quit: Page closed]16:56
17:22 <gsomix> sonney2k, sonne|work java_modular is ok now.
-!- harshit_ [~harshit@] has joined #shogun17:23
-!- PhilTillet [~Philippe@] has left #shogun ["Leaving"]17:31
-!- pluskid [~pluskid@] has quit [Quit: Leaving]17:40
-!- harshit_ [~harshit@] has quit [Remote host closed the connection]17:46
-!- xiangwang [~xiangwang@] has joined #shogun18:03
18:33 <n4nd0> sonne|work: there is no huge speedup using the decomposition :(
18:33 <n4nd0> this is the way I am computing it, just in case I got the idea wrong
-!- emrecelikten [~emre@] has joined #shogun19:08
-!- xiangwang [~xiangwang@] has quit [Remote host closed the connection]19:14
19:43 <@sonney2k> gsomix, ok then I better merge things :)
19:44 <@sonney2k> n4nd0, no wonder
19:44 <@sonney2k> n4nd0, you should precompute x^T x
19:44 <@sonney2k> and store these constants
CIA-64shogun: Soeren Sonnenburg master * r48b3ffd / (42 files in 7 dirs):19:46
CIA-64shogun: Merge pull request #458 from pluskid/multiclass-refactor219:46
CIA-64shogun: Refactoring of multiclass (+11 more commits...) -
CIA-64shogun: Soeren Sonnenburg master * rd6cc3eb / (7 files in 7 dirs):19:47
CIA-64shogun: Merge pull request #474 from gsomix/big_deal_with_sgvector19:47
CIA-64shogun: add SGVector& typemaps for 'smooth' transition (+7 more commits...) -
19:50 <n4nd0> sonney2k: to compute x^T x, is the way I am doing it right now with cblas_ddot(xlen, x, 1, x, 1) fine or not?
19:50 <@sonney2k> n4nd0, just use CMath::dot() (which effectively does this)
19:50 <n4nd0> the way instead of the one ^
19:51 <n4nd0> all right
19:51 <n4nd0> do you expect a major improvement from storing it?
19:51 <@sonney2k> n4nd0, of course
19:51 <@sonney2k> factor 3 faster then
19:52 <n4nd0> wait a moment
19:52 <@sonney2k> n4nd0, btw I think you should compare this to JL's covertree distance
shogun-buildbotbuild #754 of libshogun is complete: Failure [failed compile]  Build details are at  blamelist: pluskid@gmail.com19:52
19:52 <@sonney2k> ours should be faster then
19:53 <n4nd0> with precomputing x^T x
19:53 <n4nd0> this is for the whole training data right?
19:53 <@sonney2k> yes, going once over all data
19:54 <n4nd0> I mean for each training vector x store x^T x
19:54 <@sonney2k> have a look at CGaussianKernel
19:54 <@sonney2k> it does the same
19:54 <n4nd0> ok, I will check it
19:54 <@sonney2k> I guess you can even just copy the code
19:54 <n4nd0> but I thought in any case that in shogun we normally assume that these types of matrices do not fit into memory
19:54 <@sonney2k> damned buildbot :D
19:55 <@sonney2k> n4nd0, most algorithms assume that data fits in memory
19:55 <@sonney2k> but storing a scalar per vector is not really a lot even if not
19:57 <n4nd0> in the same way it is done for the training data
19:57 <n4nd0> is it good to do it for the query points?
19:58 <@sonney2k> you have to do it for lhs and rhs
19:58 <n4nd0> got it
19:58 <n4nd0> thank you for the pointer to CGaussianKernel :)
CIA-64shogun: Soeren Sonnenburg master * r732f288 / src/shogun/classifier/svm/SVM.h : add missing friend 'class' Foo to fix compile error -
@sonney2kn4nd0, let me know whether JLs distance is faster or ours now - and if ours is faster our KNN+covertree should be too19:59
@sonney2kshogun-buildbot, be happy!20:04
shogun-buildbotWhat you say!20:04
emreceliktensomeone set up us the bomb20:04
@sonney2kshogun-buildbot, you are the bomb!20:05
shogun-buildbotbuild #756 of libshogun is complete: Success [build successful]  Build details are at
shogun-buildbotWhat you say!20:05
gsomixsonney2k, ok. I'm waiting for shogun-buildbot judgement.20:06
shogun-buildbotbuild #686 of r_static is complete: Failure [failed test_1]  Build details are at  blamelist: sonne@debian.org20:09
CIA-64shogun: Soeren Sonnenburg master * ra6d10e0 / src/shogun/machine/MulticlassMachine.cpp : fix compile warnings -
shogun-buildbotbuild #708 of cmdline_static is complete: Failure [failed test_1]  Build details are at  blamelist: sonne@debian.org20:13
CIA-64shogun: Soeren Sonnenburg master * r36b4b6e / examples/undocumented/libshogun/classifier_mklmulticlass.cpp : fix include MultiClass -> Multiclass -
@sonney2kgsomix, it will take some time - have to cleanup pluskids bugs :)20:14
@sonney2kgsomix, I really hope we can make some fast progress on this sgvector business20:15
@sonney2kthis and some reshuffling into lib/external / multitask / multiclass are the big changes20:15
@sonney2kall the rest are just incremental feature updates20:15
@sonney2knot so likely to break the whole build20:15
CIA-64shogun: Soeren Sonnenburg master * r53cc701 / examples/undocumented/libshogun/classifier_libsvmmulticlass.cpp : another MultiClass -> Multiclass include fix -
shogun-buildbotbuild #687 of octave_static is complete: Failure [failed test_1]  Build details are at  blamelist: sonne@debian.org20:17
gsomixsonney2k, I also hope so.20:18
@sonney2kshogun-buildbot, stinky-finger!20:18
shogun-buildbotWhat you say!20:18
gsomixsonney2k, I have some misunderstanding with reference counting. can we discuss these issues a little later?20:20
@sonney2ksure but first the SGVector& business20:20
shogun-buildbotbuild #674 of python_static is complete: Failure [failed test_1]  Build details are at  blamelist: sonne@debian.org20:21
gsomixsonney2k, ok. I need to go now. cu, later for discuss future plan.20:23
@sonney2kgsomix, cu20:23
* sonney2k waits for the buildbot to send out love and happiness again20:24
shogun-buildbotbuild #215 of nightly_none is complete: Success [build successful]  Build details are at
-!- genix [~gsomix@] has joined #shogun21:26
-!- gsomix [~gsomix@] has quit [Ping timeout: 276 seconds]21:30
-!- genix is now known as gsomix21:38
gsomixwiking, around21:38
gsomixwiking, did you wanted to tell me smth? I'm slowpoke - missed your message.21:46
wikingah yeah21:46
wikingi think maybe blackburn already told you about a little change we'll need to do in Features.h21:46
wikingand in all the inherited classes21:46
gsomixwiking, const methods?21:47
wikingok cool so you know about it \o/21:47
-!- n4nd0 [] has quit [Quit: leaving]21:48
* gsomix afk21:48
wikingthnx in advance for the changes!21:49
wikingnow it's time for the night session developing...21:53
gsomixit's time to homework for me.22:02
@sonney2kalright the deduplication meeting is over and everyone of you who was in before is still in :D22:33
wikingnow the only question is22:34
wikingwho's in22:34
wikingbut i guess we'll only know this next week22:34
CIA-64shogun: Soeren Sonnenburg master * rac7a60e / (2 files): fix examples MultiClass -> Multiclass -
wikingthe we'll only know it next monday between 21:00 and 22:00 CEST22:36
CIA-64shogun: Soeren Sonnenburg master * rfa27ebf / (16 files in 9 dirs): MultiClass -> Multiclass replacement in all remaining examples -
wikingsonney2k: so there's gonna be a structural-svm gsoc project?22:37
wikingok cool22:37
@sonney2khmmhh pluskid really did lots of changes causing all MC examples to fail22:38
@sonney2ksimple changes are missing but I guess I never said that one has to compile for all interfaces and run make check-examples22:39
@sonney2kanyway it will be over soon22:39
@sonney2kwiking, anyway 8 students is a lot and I hope you will all have fun :)22:40
wikinghahaha so should i understand the implied meaning here ? ;)22:40
@sonney2kI am not implying anything22:41
wikingah ok22:41
emreceliktenI sadly could not find time for doing the patch requirement for this project22:46
emreceliktenMaybe I will work on something on my own if none of my proposals get accepted in general22:47
emreceliktenI still have that HMM project in my mind :D22:47
@sonney2kemrecelikten, you mean continous output hmms?22:49
emreceliktenYep, Gaussian mixtures22:49
emreceliktenI am a text-to-speech guy22:49
emreceliktenIt's used a lot22:49
@sonney2kwould be a very welcome contribution... maybe you even find C/C++ code for that already22:50
@sonney2khuh? someone has withdrawn his proposal?!22:50
emreceliktenI think there are various tools, but performance is quite important I think22:50
emreceliktenBaum-Welch takes a lot of time22:50
@sonney2kgot to leave train22:50
wikingsonney2k: mmm wasn't it 9 slots?22:51
emreceliktenWhat are the projects of you guys?22:55
emreceliktenI know that Fernando applied to structured SVM outputs22:55
gsomixemrecelikten, "Many changes, improvements, bugs, lolz and vodka for Good!"22:57
wikinggsomix: the best project \o/22:58
wikingemrecelikten: i've applied for latent svm22:58
emreceliktenVodking all the time then huh?22:58
emreceliktenToo bad that I'm a straight edge :P22:59
emreceliktenwiking: Nice. I applied for structured SVM output and C5.022:59
wikingyeah structured svm would be cool to have22:59
emreceliktenBut since I haven't sent a patch, I don't think I have a chance22:59
23:00 <wiking> well i've sent a lot of patches
23:00 <wiking> but not much related to latent svm yet :(
23:00 <wiking> that is still in progress
23:00 <emrecelikten> I think it's fine, you're supposed to work on it later
23:00 <wiking> i might have something tonight
23:00 <emrecelikten> If you could do something serious by now, there wouldn't be a need for GSoC, would there?
23:01 <wiking> just cooked myself a turkish coffee so i should have the power ;)
23:01 <wiking> but yeah the latent svm would be in big need of having structural svm
23:01 <emrecelikten> Should I send you some? :P
23:01 <wiking> emrecelikten: i have some here with me ;P
23:01 <wiking> not the structural svm
23:01 <wiking> but the turkish coffee ;)
23:02 <wiking> i want to finish this
23:02 <wiking> i'm aaalmost there
23:02 <wiking> the minimization is done
23:03 <wiking> now i just have to figure out how to handle the latent variables in a nice way
23:03 <wiking> and then i might already be able to run the first dummy test
23:03 <wiking> and actually it'd be great if it really works out
23:03 <wiking> since it's gonna use libocas
23:03 <emrecelikten> Best of luck to you then :)
23:03 <wiking> which is supposed to be very efficient
23:04 <wiking> and latent svm will need that
23:05 <wiking> since each iteration basically consists of a cutting plane finding....
23:05 <wiking> but i even have to now construct a dummy test for the whole thing
23:05 <wiking> so that i can see that it actually works :)
23:05 <wiking> not just calculating infinity
23:06 <wiking> and till now i haven't managed to find a simple train/test set .... so it seems i'll have to construct it myself
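The alternation wiking describes (impute the latent variables, then run one round of SVM training, repeat) can be exercised on exactly the kind of synthetic set he is after. This toy sketch is an assumption throughout: it uses plain subgradient steps instead of libocas's cutting-plane solver, and the "latent variable" is simply which of two candidate feature vectors represents each example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic latent data: each example has two candidate views; the latent
# variable h picks the view that represents it.
n, d = 60, 3
X = rng.normal(size=(n, 2, d))
w_true = np.array([1.0, 0.0, 0.0])            # hidden ground-truth direction
y = np.sign(np.max(X @ w_true, axis=1))       # label from the best view

w = np.zeros(d)
C, lr = 1.0, 0.05
for _ in range(100):
    # 1) latent imputation: pick the highest-scoring view under current w
    h = np.argmax(X @ w, axis=1)
    phi = X[np.arange(n), h]
    # 2) fixed-latent SVM step: subgradient of 0.5||w||^2 + C*sum(hinge)
    viol = y * (phi @ w) < 1
    w -= lr * (w - C * (y[viol, None] * phi[viol]).sum(axis=0))

h = np.argmax(X @ w, axis=1)
acc = np.mean(np.sign(X[np.arange(n), h] @ w) == y)
```

In a real latent SVM each outer iteration would solve the fixed-latent problem to (approximate) optimality with a cutting-plane method, which is where an efficient solver like libocas pays off.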
emreceliktenYou shouldn't worry about being accepted though, regarding the conversation at top23:09
emrecelikten8 slots is a lot23:09
emreceliktenUnless someone is ninjaing your project, you should be fine I guess23:09
emreceliktenMy "primary" project had 2 slots last year23:10
emreceliktenIt's Spartan-like competition to get accepted there23:10
emreceliktenNot saying that it's easy here or something, but there wasn't any "friendly competition", that's what I meant :D23:16
@sonney2kemrecelikten, wiking - the funny thing is that last year no one applied for the structured output task and the latent svm!23:27
wikingtoo bad23:27
emreceliktensonney2k: I sort of feel bad about not applying to Shogun last year23:36
emreceliktenI discovered it too late23:36
@sonney2kemrecelikten, well it was much tougher to get in last year23:37
@sonney2k5 slots for 73 proposals23:37
emreceliktenIt's like 8 slots for 50 proposals now?23:37
wikingmmm i think i have a problem with git svn rebase ;)23:38
@sonney2kbut decision is easier this year too - what counts is # and quality of patches23:38
emreceliktenZero of which are mine :P23:38
23:39 <wiking> oh i was meaning to ask
23:39 <wiking> don't we want to do the documentation in ?
23:40 <wiking> easy to use and nice html can be generated as well
23:43 <@sonney2k> emrecelikten, yeah well ... contributing not just directly before gsoc is also an option... even for next year (if we apply & get in...)
23:43 <@sonney2k> wiking, but it won't work with doxygen
23:44 <@sonney2k> or are you thinking of writing a new separate doc?
23:46 <wiking> maybe a bit more documentation would be good
23:46 <wiking> aaah btw maybe u haven't seen it
23:46 <wiking> about doxygen
23:47 <wiking> could we turn on the flag so that it actually adds the inherited functions as well to a class interface?
23:47 <wiking> or do u have strong objections about that
23:48 <emrecelikten> sonney2k: I know, I know :)
23:51 <wiking> i think this is the doxygen option
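The Doxygen option wiking most likely means is `INLINE_INHERITED_MEMB`, which makes inherited members show up in each derived class's page as if they were declared there. In the Doxyfile it would be a one-line change (whether Shogun also wants `INHERIT_DOCS` alongside it is a judgment call):

```
# show inherited members inline in each derived class's documentation
INLINE_INHERITED_MEMB = YES
```

That is exactly the use case he describes below: seeing, from a subclass page like LatentMachine's, which virtual functions come from CLinearMachine and CMachine.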
23:52 <@sonney2k> too bad that pluskid is not here :(
23:52 <@sonney2k> wiking, I know that flag - do you really think that helps?
23:53 <wiking> now that i'm doing latent machine
23:53 <wiking> it's directly inherited from linear machine
23:53 <wiking> which is a machine
23:53 <wiking> so i was like ok let's see what the functions are
23:54 <wiking> so i was checking two headers
23:54 <wiking> looking for what the pure virtual functions are etc
23:54 <@sonney2k> the apply method of multiclass svm is gone?!?
23:54 <@sonney2k> cannot find it in multiclasskernelmachine either
23:55 <wiking> so it'd be great if for example i could see from the class that i'm inheriting from that ok these are the functions i'll need to implement and these i actually have by default
23:55 <@sonney2k> wiking, I mean we can try it
23:55 <wiking> so in my case it's helpful
23:55 <wiking> of course now that clang has these amazing error and warning messages
23:55 <wiking> it's actually a lot easier to see where i did something wrong with the inheritance and am missing some pure virtual function implementation
23:56 <@sonney2k> wiking, send a pull request :)
23:56 <wiking> heheh ok
23:56 <wiking> interfaces/modular/modshogun.doxy is the config file right?
23:58 <wiking> oh no
23:58 <wiking> there's Doxyfile ;)
--- Log closed Sat Apr 21 00:00:15 2012