--- Log opened Wed May 16 00:00:40 2012
00:14 -!- emrecelikten [~Anubis@] has joined #shogun
02:04 -!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]
02:15 -!- av3ngr [av3ngr@nat/redhat/x-yunsofbtsbsqzqam] has joined #shogun
02:47 -!- wiking [] has joined #shogun
02:47 -!- wiking [] has quit [Changing host]
02:47 -!- wiking [~wiking@huwico/staff/wiking] has joined #shogun
03:07 -!- emrecelikten [~Anubis@] has quit [Read error: Connection reset by peer]
05:11 -!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]
05:29 -!- av3ngr [av3ngr@nat/redhat/x-yunsofbtsbsqzqam] has quit [Remote host closed the connection]
05:31 -!- av3ngr [av3ngr@nat/redhat/x-zfabiwenfrymddtr] has joined #shogun
<CIA-113> shogun: iglesias master * r22685c6 / src/shogun/features/DotFeatures.cpp : * minor indent fix in DotFeatures -
<CIA-113> shogun: iglesias master * r07bc029 / (2 files): * minor indent fixes -
<CIA-113> shogun: iglesias master * rf586c7e / (2 files): * minor fixes -
<CIA-113> shogun: Soeren Sonnenburg master * r6051a71 / (5 files in 2 dirs): Merge pull request #530 from iglesias/fix-indent -
05:57 -!- gsomix [~gsomix@] has quit [Ping timeout: 245 seconds]
07:55 -!- VarAgrawal [cf2e371e@gateway/web/freenode/ip.] has joined #shogun
07:56 -!- VarAgrawal [cf2e371e@gateway/web/freenode/ip.] has quit [Client Quit]
08:49 -!- wiking [] has joined #shogun
08:49 -!- wiking [] has quit [Changing host]
08:49 -!- wiking [~wiking@huwico/staff/wiking] has joined #shogun
08:52 -!- wiking [~wiking@huwico/staff/wiking] has quit [Client Quit]
09:39 -!- av3ngr [av3ngr@nat/redhat/x-zfabiwenfrymddtr] has quit [Quit: That's all folks!]
09:48 -!- blackburn [~qdrgsm@] has joined #shogun
<CIA-113> shogun: Sergey Lisitsyn master * rf4b1c9e / (9 files in 2 dirs): Restored data in ASP (+7 more commits...) -
09:55 <blackburn> whoa I shouldn't have done that
09:57 <shogun-buildbot> build #943 of libshogun is complete: Failure [failed compile]  Build details are at  blamelist: vipin@cbio.mskcc.org
12:15 -!- n4nd0 [] has joined #shogun
12:33 -!- gsomix [~gsomix@] has joined #shogun
12:33 <gsomix> hi all
12:35 <blackburn> the optics expert in da house
12:38 <gsomix> blackburn, huh :]
12:38 <n4nd0> hi gsomix
12:50 -!- wiking [] has joined #shogun
12:50 -!- wiking [] has quit [Changing host]
12:50 -!- wiking [~wiking@huwico/staff/wiking] has joined #shogun
13:18 -!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]
13:53 -!- n4nd0 [] has quit [Read error: Operation timed out]
14:01 -!- n4nd0 [] has joined #shogun
14:02 -!- wiking [~wiking@] has joined #shogun
14:02 -!- wiking [~wiking@] has quit [Changing host]
14:02 -!- wiking [~wiking@huwico/staff/wiking] has joined #shogun
14:02 -!- wiking [~wiking@huwico/staff/wiking] has quit [Client Quit]
14:17 <n4nd0> sonne|work: hi! do you have a moment?
14:17 <sonne|work> n4nd0: whats up?
14:18 <n4nd0> sonne|work: so I have been thinking about the features issue we talked about
14:18 <n4nd0> sonne|work: I am not sure if I am getting the design properly
14:18 <n4nd0> take a look here a moment
14:20 <n4nd0> so in the VanillaSOMachine
14:20 <n4nd0> I see that we can use this CJointFeatures::dot and pass it the weight vector
14:20 <n4nd0> to compute a dot product w*psi(x,y)
14:21 <n4nd0> inside this method dot, I don't think we can avoid computing psi(x,y) explicitly
14:21 <n4nd0> we need a function there that computes psi(x,y) and returns an SGVector or an SGSparseVector
14:23 <n4nd0> sonne|work: or did I miss something?
14:25 <sonne|work> n4nd0: that is half true
14:25 <n4nd0> tell me the other half then ;)
14:25 <sonne|work> n4nd0: you only have to have a way to compute the non-zero indices of psi(x,y)
14:26 <sonne|work> so for very sparse Psi/x/y this can be *a lot* less work
14:26 <sonne|work> for example consider you have sparse x with say 1% of the components non-zero
14:28 <sonne|work> and Psi(x,y) is some block thing (x for each y stacked together) you have to touch just 1% of the data
14:28 <n4nd0> well I understand that is an example of when one would like to return an SGSparseVector
14:30 <n4nd0> then a function Psi(x,y) that returns a vector is required in the model
14:30 <sonne|work> the dot product returns just a scalar
14:30 <sonne|work> so you need to touch only 1% of the components of w
14:30 <n4nd0> sorry not to return ^, but to use SGSparseVector in the dot product
14:31 <sonne|work> n4nd0: the dense_dot() function is supposed to return just one scalar
14:31 <sonne|work> so you never have to compute anything else other than this dot product
14:31 <sonne|work> but instead of precomputing some vector you directly compute the dot product
14:32 <sonne|work> so instead of computing a sparse vector, you look up the indices of w - and you only need to look at these indexes where Psi(x,y) is non-zero, multiply with the Psi value and sum them up and return the result
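The trick described above — computing w·Psi(x,y) by visiting only the indices where Psi is non-zero — can be sketched in a few lines. This is a toy illustration; `sparse_psi`, `dense_dot`, and the block-structured Psi are assumptions for the example, not shogun's actual DotFeatures API:

```python
def sparse_psi(x_nonzero, y, dim_per_block):
    """Toy joint feature map Psi(x, y): sparse x copied into the
    block selected by label y; everything else stays zero."""
    offset = y * dim_per_block
    return [(offset + i, v) for i, v in x_nonzero]

def dense_dot(w, x_nonzero, y, dim_per_block):
    """w . Psi(x, y) without ever materializing Psi(x, y):
    only the non-zero entries of Psi index into w."""
    return sum(w[idx] * val
               for idx, val in sparse_psi(x_nonzero, y, dim_per_block))

# sparse x with two non-zeros in an 8-dim joint space (2 blocks of 4);
# the dot product touches w[4] and w[7] only
w = [0.5] * 8
x = [(0, 2.0), (3, 1.0)]
score = dense_dot(w, x, y=1, dim_per_block=4)
```

With 1% non-zeros in x, only 1% of the components of w are ever read, regardless of the joint space dimension.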
14:33 <n4nd0> mmm wouldn't the trick be not to compute the elements of Psi(x,y) whose corresponding w element is zero?
14:35 <n4nd0> I am reading it as: instead of Psi(x,y)·w, do f(x,y,w)
14:35 <sonne|work> the trick is to not compute Psi(x,y) for dims which are 0
14:35 <n4nd0> how can you know where Psi(x,y) is zero before computing it?
14:36 <sonne|work> n4nd0: because you know psi
14:36 <sonne|work> in the simplest case psi is the identity
14:36 <sonne|work> wrt x and block structure wrt y
14:38 <n4nd0> I am sorry but I think I don't see it
14:39 <sonne|work> have a look at e.g. shogun/src/shogun/features/SparseFeatures.cpp line 307ff
14:40 <sonne|work> or shogun/features/PolyFeatures.cpp line 143ff
14:40 <sonne|work> or shogun/features/ImplicitWeightedSpecFeatures.cpp lines 172ff
14:47 <n4nd0> sonne|work: I am afraid that we may be talking about different things
14:48 <n4nd0> the function Psi is not a dot product
14:48 <sonne|work> n4nd0: we are talking about Psi(x,y)·w
14:49 <n4nd0> we are talking about the possibility of not computing Psi(x,y) explicitly and doing the product Psi(x,y)·w
14:49 <sonne|work> which is computing Psi(x,y)·w :)
14:49 <n4nd0> ok :)
14:49 <sonne|work> and that is what these functions above do
14:50 <sonne|work> with a more simple Psi not dependent on y
14:50 <n4nd0> I but all those functions above have the equivalent to Psi(x,y) already computed
14:50 <n4nd0> remove I ^
14:51 <sonne|work> they compute the index to the non-zero idx=Psi(x)
14:51 <sonne|work> idx of Psi(x)
14:52 <n4nd0> from PolyFeatures
14:52 <n4nd0> for (int j=0; j<vec2_len; j++)
14:52 <n4nd0> float64_t output=m_multinomial_coefficients[j];
14:52 <n4nd0> for (int k=0; k<m_degree; k++)
14:53 <n4nd0> there the loop is done for all the elements
14:53 <n4nd0> the loop on j
14:53 <sonne|work> n4nd0: well x is considered to be dense here
14:55 <n4nd0> and in the case it is sparse you will just jump over the zero elements
14:55 <n4nd0> but that can be done because you know the elements that are zero
14:55 <n4nd0> then you already know this Psi(x)
14:56 <sonne|work> n4nd0: no I don't jump over zero elements
14:56 <sonne|work> I only touch the non-zero ones
14:56 <sonne|work> because I only store those
14:56 <sonne|work> that would be the sparsefeatures example
14:57 <sonne|work> please paste it too :)
14:57 <n4nd0> yes, I understand that case
14:57 <sonne|work> by doing it this way you can support dense and sparse x
14:58 <sonne|work> and even strings etc
14:58 <sonne|work> in the same framework
14:59 <n4nd0> so what you suggest is, instead of defining Psi(x,y), an application dependent function in SO
15:00 <n4nd0> let's define Psi(x,y)*w
15:02 <sonne|work> which also defines w <- w + alpha*psi(x,y)
15:02 <sonne|work> same principle...
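The companion operation sonne|work mentions, w <- w + alpha*Psi(x,y), follows the same principle: update only the entries of w where Psi is non-zero. A toy sketch (`add_to_w` and the index/value representation are illustrative assumptions, not shogun's API):

```python
def add_to_w(w, alpha, x_nonzero, y, dim_per_block):
    """In-place w += alpha * Psi(x, y), visiting only the non-zero
    entries of the (never materialized) joint feature vector."""
    offset = y * dim_per_block   # block selected by label y
    for i, v in x_nonzero:
        w[offset + i] += alpha * v

# one non-zero in x -> exactly one entry of w is touched
w = [0.0] * 6
add_to_w(w, alpha=2.0, x_nonzero=[(1, 3.0)], y=0, dim_per_block=3)
```

Together, the dot product and this update are the only two primitives a linear SO learner needs, which is why the representation of Psi never has to be expanded.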
15:03 <sonne|work> n4nd0: note that this enabled me to train on 50mio examples in a 10 million dimensional feature space
15:03 <n4nd0> sonne|work: :)
15:04 <sonne|work> I can tell that this trick is much better than thinking about kernels and SO
15:04 <n4nd0> sonne|work: where does this come from by the way?
15:05 <n4nd0> the trick
15:05 <sonne|work> it is mine...
<sonne|work> n4nd0: this paper with vojtech
15:06 <n4nd0> sonne|work: I am going to read it, I hope it will help me to understand better what I've to do here
15:07 <n4nd0> and more important, why ;)
<CIA-113> shogun: Soeren Sonnenburg master * ra60c648 / (9 files in 3 dirs): Merge pull request #527 from pluskid/multiclass-ecoc (+5 more commits...) -
15:50 <sonne|work> n4nd0: one big advantage is that you have to develop just one SO learning algorithm
15:50 <sonne|work> n4nd0: that algorithm will work with all feature types, dense, sparse, strings etc
15:50 <sonne|work> no need to change a line
15:51 <n4nd0> sonne|work: that sounds very nice
15:51 <sonne|work> n4nd0: for example linear svms are usually tweaked for sparse data
15:51 <sonne|work> but since we have dotfeatures in shogun
15:51 <sonne|work> we have them tweaked for sparse, dense, strings...
15:51 <sonne|work> and one can even mix (as in concatenate) dense and sparse features...
15:52 <sonne|work> so if you want speed and kernels are no option (which they are not for SO) you want that
15:53 <n4nd0> sonne|work: why is it that kernels cannot be used in SO?
15:53 <n4nd0> where does the lack of speed come from?
15:54 <sonne|work> kernel evaluations everywhere
16:08 -!- nicococo [] has joined #shogun
16:08 <n4nd0> hey nicococo!
16:08 <nicococo> buenos fernando
16:09 <n4nd0> buenas Nico ;)
16:09 <n4nd0> nicococo: sonne|work suggested to move our discussions to the channel here, is that ok for you?
16:09 <nicococo> the sonney.. well why not :)
16:10 -!- emrecelikten [~Anubis@] has joined #shogun
16:10 <n4nd0> nicococo: so a couple of things I wanted to let you know about
16:10 <sonne|work> nicococo: any objections in using COFFIN aka dotfeatures from the beginning for SO stuff?
16:10 <nicococo> one thing i would change for so toolbox is the CStructuredLoss
16:12 <n4nd0> nicococo: it is about the loss that I wanted to tell you
<n4nd0> nicococo: look
16:12 <n4nd0> I think this is exactly what we need, and it is already in shogun
16:12 <nicococo> i suspected that ;)
16:13 <n4nd0> I should have looked for it from the beginning, my bad :D
16:13 -!- Francis_Chan [~Adium@] has joined #shogun
16:13 <nicococo> but i am not sure it is applicable here ..
16:13 <n4nd0> why so?
16:13 <nicococo> loss(prediction, label)
16:14 <nicococo> that's fine for svm loss(pred,label) = max(0,1-label*pred) but not so for so
16:14 <nicococo> (structured output)
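The binary SVM loss nicococo quotes is a one-liner, which is exactly why the signature loss(prediction, label) fits it but not the structured case, where the loss also depends on a delta term between label structures. As a sketch (`hinge_loss` is an illustrative name, not a shogun class):

```python
def hinge_loss(pred, label):
    """Binary SVM loss: loss(pred, label) = max(0, 1 - label * pred)."""
    return max(0.0, 1.0 - label * pred)

inside_margin = hinge_loss(0.5, 1)    # correct sign, but inside the margin
well_classified = hinge_loss(2.0, 1)  # correct with margin: zero loss
misclassified = hinge_loss(-1.0, 1)   # wrong side: loss grows linearly
```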
<n4nd0> nicococo: maybe this one then
16:15 <nicococo> we need a loss(x)
16:16 <nicococo> yeah.. that is what i am looking for :)
16:16 <n4nd0> but I don't really like how it is implemented there ... with macros
16:16 <n4nd0> I would prefer something similar to the CLossFunction I showed you before
16:17 <nicococo> yep and second derivative is missing
16:17 <nicococo> i know you think it is more elegant, right..
16:18 <nicococo> but they don't make a big difference.. so it is unlikely to play a lot (at least with convex ones) with loss functions in your application
16:18 <n4nd0> nicococo: what about what you said in the beginning, what would you change of CStructuredLoss?
16:18 <n4nd0> aham, I see
16:18 <nicococo> when you say structured loss everybody expects the delta loss and not the surrounding loss function
16:20 <nicococo> just a matter of naming
16:20 <n4nd0> nicococo: yeah, I agree with you, it is confusing to use that name
16:20 <nicococo> anyway, is there a possibility to extend the CLossFunction in our sense??
16:20 <n4nd0> I think that is the best possibility
16:20 <nicococo> it would be nice to use them, right?
16:20 <n4nd0> to put there some methods loss(z)
16:20 <n4nd0> and so on
16:21 <nicococo> and some indicator functions (convex, smooth, ...)
16:21 <n4nd0> these are just like constants for every loss function right?
16:22 <n4nd0> is it interesting to have them as indicator functions for any special reason?
16:22 <n4nd0> or boolean members are just fine
16:22 <n4nd0> stupid question, already in gist...
16:23 <nicococo> well when you use a CRF and have a loss function which states smooth=1 then you can use LBFGS which should be faster than any non-smooth optimization
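A minimal sketch of the indicator idea nicococo describes: the solver queries the loss for smoothness and picks LBFGS when it can. The class and method names here are hypothetical stand-ins, not the actual CLossFunction interface:

```python
class SquaredLoss:
    """Smooth everywhere, so gradient-based solvers apply."""
    def loss(self, z):
        return 0.5 * z * z
    def is_differentiable(self):
        return True

class HingeLoss:
    """Kink at z == 1, so a subgradient method is needed."""
    def loss(self, z):
        return max(0.0, 1.0 - z)
    def is_differentiable(self):
        return False

def pick_solver(loss_fn):
    # a solver front-end can branch on the indicator alone,
    # without knowing which concrete loss it was handed
    return "lbfgs" if loss_fn.is_differentiable() else "subgradient"
```

Whether these are indicator methods or plain boolean members is cosmetic; the point is that optimizers can dispatch on them generically.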
16:24 <n4nd0> all right
16:25 <n4nd0> nicococo: now related to what sonne|work asked you about COFFIN, we have been discussing about it for quite a while :D
16:26 <nicococo> ohh.. wait one more issue
16:26 <n4nd0> yeah tell me
16:27 <nicococo> you added an apply-function for the SOMachine.. but this should be only a wrapper calling the apply-function of SOModel (which is missing)
16:27 <nicococo> the reason is that in SOModel is already the argmax (=apply) code that is application specific
16:29 <n4nd0> aham I see
16:29 <n4nd0> not exactly a wrapper though, I think
16:30 <n4nd0> since argmax does it just for one feature vector and apply may work on a whole set of features
16:30 <n4nd0> but it should be basically to call iteratively argmax
16:31 <nicococo> so, if apply is for multiple samples then yes
16:31 <n4nd0> it may be for multiple or just for one, it depends on the arguments (it is overloaded)
16:32 <n4nd0> apply() -> the whole CFeatures* member
16:32 <n4nd0> apply(CFeatures*) -> all the features in the argument
16:32 <n4nd0> apply(int) -> just the indexed one
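The split n4nd0 outlines — the machine's apply() just loops the model's application-specific argmax() — can be sketched with toy stand-ins (all names here are placeholders for the SOMachine/SOModel pair, not shogun classes):

```python
class ToyModel:
    """Stand-in for SOModel: owns the application-specific argmax."""
    def __init__(self, labels_per_input):
        self._labels = labels_per_input
    def argmax(self, w, i):
        # real code would search over structured outputs using w;
        # a table lookup keeps the sketch runnable
        return self._labels[i]

class ToyMachine:
    """Stand-in for SOMachine: apply() only iterates argmax()."""
    def __init__(self, model, n):
        self.model, self.n, self.w = model, n, [0.0]
    def apply(self, indices=None):
        # apply() over everything, or apply(indices) over a subset,
        # mirroring the overloads listed above
        idx = range(self.n) if indices is None else indices
        return [self.model.argmax(self.w, i) for i in idx]

machine = ToyMachine(ToyModel(["A", "B", "C"]), 3)
predictions = machine.apply()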
16:33 <nicococo> so, overall a pretty nice job so far :) i am almost sure it will work
16:33 <nicococo> wanna talk about the test application?
16:34 <n4nd0> can we talk a moment about the COFFIN thing?
16:34 <n4nd0> I have been a bit stuck with that
16:35 <n4nd0> I understand that the idea would be to provide functions in CStructuredModel to compute w*Psi(x,y) and w <- w + alpha*Psi(x,y)
16:36 <n4nd0> instead of the actual Psi(x,y)
16:37 <nicococo> is the sunny-boy here and can give some ideas?
16:37 <n4nd0> sonne|work: around?
16:38 <nicococo> did you already talk to fernando about that?
16:39 -!- sonne|work [~sonnenbu@] has left #shogun []
16:40 <n4nd0> nicococo: but do you think that the function definitions will change a lot?
16:40 <nicococo> well that was a short.. n4nd0 soeren suggested to put additional functions into SOModel?
16:42 <n4nd0> take a look from 14:59
16:43 <n4nd0> I think that is interesting that
16:43 <n4nd0> <sonne|work> n4nd0: one big advantage is that you have to develop just one SO learning algorithm
16:43 <n4nd0> <sonne|work> n4nd0: that algorithm will work with all feature types, dense, sparse, strings etc
16:44 <n4nd0> since it covers what you said the last day about supporting sparse representation of the joint features
16:46 <nicococo> yep, i have read the coffin paper but never worked with it, so i have no experience at all here ... i guess the trick is to find a way around the dot-product here, right?
16:48 <nicococo> is it an application thing (SOModel) or more a linear learning machine thing (LinearSOMachine)? i prefer the latter one
16:48 <n4nd0> I thought of that
16:48 <n4nd0> I think it is an application thing
16:49 <n4nd0> since we cannot make an abstraction on how Psi looks internally for this
16:49 <nicococo> it is not completely uncoupled. but it is concerned with the representation while learning which is a machine thing ..
16:50 <nicococo> computation of the dotproduct w'psi
16:50 <n4nd0> in order to do w*Psi(x,y) without computing Psi(x,y) explicitly, it is required to know what Psi does
16:52 <n4nd0> I am trying to think of it as a representation thing not learning
16:52 <nicococo> that's true, if you need to know, then it has to be in the SOModel.. but what about the CResult struct .. it is rendered useless, right
16:52 <n4nd0> yeah, this definitely changes CResult
16:52 <nicococo> i mean, you need a special SOModel and a special LearningMachine
16:53 <n4nd0> I don't think you need a special LearningMachine for it
16:53 <nicococo> then we have to change the interface again...
16:53 <n4nd0> it is just like instead of making a function Psi(x,y) in the model
16:53 <n4nd0> we do another f(v,x,y) = v*Psi(x,y)
16:54 <n4nd0> where v is any vector
16:54 <nicococo> instead of computing w'psi in the learning machine it is moved to SOModel
16:54 <n4nd0> later, from the learning machine, one calls it as f(w,xi,yi)
16:54 <n4nd0> does that make sense for you?
16:55 <n4nd0> I think it makes sense too, but this consideration is quite general
16:55 <nicococo> well maybe it works perfectly in general
16:55 <nicococo> but i don't know
16:56 <n4nd0> it could be a good idea to think how this would change the function in a particular example, let's say hmm
16:56 <nicococo> (crf, so-svm, optimization schemes.. what about latent structures)
16:57 <n4nd0> at first sight it could be applied to latent defining a function g(w,h,x,y)
16:57 <nicococo> for hmm the computations are okay with that (i am pretty sure)
16:57 <n4nd0> where h is the latent feature vector
16:57 <n4nd0> but I should talk to wiking about it first
16:58 <nicococo> but you need to construct one constraint per iteration where you need access to the psi(x_i,y_i)-psi(x_i,y_hat)
16:59 <n4nd0> is it that in the latent or the general case?
16:59 <nicococo> in the general case
16:59 <n4nd0> do you mean lines 35 and 36?
17:00 <nicococo> okay.. i think the coffin idea may work for algorithms where you can explicitly access the dot-products, right
17:00 <nicococo> but when you have a generic QP solver that is not the case
17:01 <n4nd0> when you said psi(x_i,y_i)-psi(x_i,y_hat)
17:02 <n4nd0> is it that for doing w*( psi(x_i,y_i)-psi(x_i,y_hat) ) ??
17:02 <n4nd0> or the other way round in the difference according to the gist
17:02 <nicococo> in the gist on line 40
17:02 <n4nd0> solve qp??
17:03 <nicococo> you call a QP solver with an inequality constraint matrix A
17:03 <nicococo> Ax <= b
17:03 <nicococo> this matrix A has a block structure and contains all psi(x_i,y_i)-psi(x_i,y_hat) ever constructed
17:04 <nicococo> (here x has also a block structure and contains our solution vector w)
17:04 <n4nd0> what do you mean with block structure?
17:05 <nicococo> if you solve a primal svm
17:06 <nicococo> the solution vector x (delivered by QP solver) consists of [w, slacks]
17:06 <nicococo> so this is a block structure
17:06 <nicococo> nothing special, just that the vector consists of several variables
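The block structure nicococo describes can be illustrated with a toy builder for one row of the constraint matrix A, with the QP variable x = [w, slacks]. This is a sketch under assumed sign conventions (constraint w·(psi - psi_hat) >= delta - slack_i, rewritten as A x <= b), not mosek or shogun code:

```python
def constraint_row(psi_true, psi_hat, example_idx, n_examples):
    """One row of A for the margin constraint of example_idx:
    -w . (psi_true - psi_hat) - slack_i <= -delta,
    laid out over the block-structured variable x = [w, slacks]."""
    w_part = [-(t - h) for t, h in zip(psi_true, psi_hat)]  # coefficients on w
    slack_part = [0.0] * n_examples
    slack_part[example_idx] = -1.0                          # coefficient on slack_i
    return w_part + slack_part

# toy 2-dim psi difference, 2 training examples -> a 4-entry row
row = constraint_row([1.0, 0.0], [0.0, 1.0], example_idx=0, n_examples=2)
```

Each cutting-plane iteration appends such a row, which is why A grows with the iteration count while the feature dimension stays fixed.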
17:07 <nicococo> but if we need the psi-psi_hat then for this machine the coffin makes no sense, right?
17:08 <n4nd0> why? In order to do that operation?
17:09 <nicococo> because we cannot replace the dot-product calculation of the qp-solver
17:10 <n4nd0> but we can give it directly psi-psi_hat
17:11 <nicococo> but then we need an explicit vector for that.. the coffin idea is to avoid that if i understand it correctly
17:11 <nicococo> (e.g. when using 100mio dimensions)
17:12 <n4nd0> we can still avoid it when we do w*Psi(x,y)
17:13 <n4nd0> in the other cases, to pass it to the QP solver, we can just give it Psi-Psi_hat directly
17:13 <nicococo> still an explicit vector
17:13 <n4nd0> yes, that's true ...
17:13 <n4nd0> I don't know, I am kind of confused with this coffin idea
17:14 <nicococo> me too :))
17:14 <@sonney2k> guys then read the paper
17:14 <nicococo> read it already.. there are some brackets missing :)
17:14 <@sonney2k> nicococo, heh
17:14 <n4nd0> sonney2k: I have read half of it
17:15 <nicococo> table 1 ;)
17:15 <@sonney2k> nicococo, isn't Psi huge in practise?
17:15 <@sonney2k> how can you even solve it?
17:15 <nicococo> well if you think 2000 is huge
17:15 <@sonney2k> I mean when you have Psi-Psi_hat's - and I guess one per iteration?
17:16 <@sonney2k> so your feature space is exploding with #iterations?
17:16 <@sonney2k> or do you compress them somehow?
17:16 <nicococo> no not my feature space it is the amount of constraints given to the qp solver
17:16 <nicococo> feature space still doesn't exceed 10000 dims
17:17 <nicococo> and stays the same in one application
17:18 <nicococo> so, i understand when using coffin, one can use high dim feature spaces, right?
17:18 <nicococo> but you have to replace any dot-product
17:18 <@sonney2k> yeah but it is #iterations * dim(Psi-Psi_hats)
17:18 <@sonney2k> so the only idea I have here is to return a sparse result for a Psi-Psi_hat difference
17:18 <@sonney2k> nicococo, yup
17:19 <nicococo> well, the difference is usually dense
17:19 <@sonney2k> w <- alpha w*Phi(x)
17:20 <nicococo> this means that using coffin in the whole application and then calling a qp solver makes no sense, right?
<@sonney2k> nicococo, these ops
17:20 <@sonney2k> nicococo, not true
17:21 <@sonney2k> it makes sense
17:21 <@sonney2k> one could use million dim feature spaces
17:21 <@sonney2k> on many sequences / inputs
17:21 <nicococo> (with qp solver i mean mosek or something where i don't have access to)
17:21 <@sonney2k> that fit in memory
17:21 <@sonney2k> which wouldn't otherwise
17:21 <@sonney2k> you just have to have few iterations to get convergence
17:22 <@sonney2k> we have the same problem with ocas
17:22 <@sonney2k> (primal CP svm)
17:22 <nicococo> if i cannot access the solver to replace the w'psi ?
17:22 <@sonney2k> but usually things converge fast
17:22 <@sonney2k> nicococo, yes you explicitly compute psi-psi_hat
17:23 <@sonney2k> but you just have these things a few hundred times
17:23 <@sonney2k> and so can probably only use million dim spaces
17:24 <nicococo> again, for using that trick i need to avoid expanding the feature vectors, right or wrong?
17:24 <@sonney2k> you can do that
17:24 <@sonney2k> just not very often
17:25 <@sonney2k> if you have say 100000 sequences
17:25 <@sonney2k> you don't need to compute psi for all of them upfront
17:26 <nicococo> okay, i see what you mean..
17:26 <@sonney2k> just for your solver
17:26 <@sonney2k> so you can squeeze them in memory
17:27 <@sonney2k> can work with any kind of input features (strings, sparse, dense vectors)
17:27 <nicococo> but for the vanilla linear so svm with say mosek i don't gain anything
17:27 <@sonney2k> yes for the solver not
17:27 <@sonney2k> but I suspect that won't take most time anyway but argmax etc
17:27 <blackburn> oh how many messages..
17:27 <@sonney2k> blackburn, have you been hiding under a rock again?
17:28 <nicococo> but when i start writing my own solver based on unconstrained formulations i can possibly gain a lot
17:28 <@sonney2k> nicococo, indeed
17:28 <blackburn> sonney2k: something like that
17:28 <@sonney2k> liblinear etc can directly use that
17:28 <@sonney2k> nicococo, ocas also has the same problem
17:28 <@sonney2k> we have to store the CP
17:28  * sonney2k has to leave the train
17:28 <nicococo> a bientot
17:29 <@sonney2k> so I guess uricamic's
17:29 <@sonney2k> stuff will help here too
17:29 <@sonney2k> (fewer iterations)
17:29 <@sonney2k> and vojtech knows ocas and coffin...
17:29  * sonney2k off
17:30 <n4nd0> nicococo: so what do you think then?
17:30 <n4nd0> should we define these functions w*Psi(x,y) too?
17:31 <nicococo> n4nd0: it is much clearer now :) and still interesting, but i need more time to think about it, therefore i suggest to start the test multiclass application, what do you think?
17:32 <n4nd0> nicococo: agree
17:32 <nicococo> should we talk friday again??
17:32 <n4nd0> nicococo: sure
17:33 <n4nd0> nicococo: I will fix the loss issue and start with the vanilla SO finally
17:33 <n4nd0> luckily we will be able to discuss the multiclass application on Friday, ok?
17:34 <nicococo> n4nd0: good idea.. and good work too!
17:34 <n4nd0> thank you!
17:34 <n4nd0> I have to go now running :)
17:34 <nicococo> see you on friday (i go biking)
17:34 -!- Francis_Chan1 [~Adium@2001:da8:203:1823:ddae:d847:31d9:bf46] has joined #shogun
17:34 <n4nd0> running = I'm in a rush :P
17:34 -!- n4nd0 [] has quit [Quit: leaving]
17:34 -!- nicococo [] has left #shogun []
17:37 -!- Francis_Chan [~Adium@] has quit [Ping timeout: 244 seconds]
17:58 -!- gsomix [~gsomix@] has quit [Ping timeout: 245 seconds]
18:24 -!- Francis_Chan1 [~Adium@2001:da8:203:1823:ddae:d847:31d9:bf46] has quit [Quit: Leaving.]
18:38 <blackburn> sonney2k: I am really dying with removing vec_index!
18:48 -!- blackburn [~qdrgsm@] has quit [Quit: Leaving.]
18:50 -!- blackburn [~blackburn@] has joined #shogun
19:08 -!- puffin444 [62e3926e@gateway/web/freenode/ip.] has joined #shogun
19:17 -!- puffin444 [62e3926e@gateway/web/freenode/ip.] has quit [Quit: Page closed]
19:22 -!- Marty28 [] has joined #shogun
19:32 -!- Marty28 [] has quit [Quit: Colloquy for iPad -]
19:44 -!- blackburn [~blackburn@] has quit [Quit: Leaving.]
19:46 -!- blackburn [~qdrgsm@] has joined #shogun
20:04 <@sonney2k> blackburn, dying is no option!
20:04 <@sonney2k> shall I do it or what?
20:06 <blackburn> sonney2k: no I'll continue
20:06 <blackburn> sonney2k: but I would rather do the sgreferenced data first
20:08 <@sonney2k> blackburn, it really looks like a piece of cake
20:08 <blackburn> sonney2k: removing index?
20:09 <@sonney2k> blackburn, give me 10 minutes - it really should be easy
20:09 <blackburn> hmm it is referenced from sparse features, ascii file, etc
20:09 <blackburn> and even in typemaps
20:09 <blackburn> I gave up in typemaps
20:10 <@sonney2k> whatever let me just do it
20:10 <blackburn> sonney2k: it is up to you - I can continue
20:10 <@sonney2k> IMHO very little work but we will see
20:11 <@sonney2k> woah git pull is taking ages..
20:11 <blackburn> yes it is slow today
20:11 <blackburn> sonney2k: have you seen my commit on merging the new asp version?
20:12 <blackburn> I am afraid it was a mistake but no idea whether I should revert it
20:12 <blackburn> student of gunnar asked me to merge it
20:12 <@sonney2k> no idea - I don't have time to maintain it...
20:12 <@sonney2k> he takes care of these things at least a bit
20:12 <@sonney2k> pull finished
20:12 <blackburn> I said that he should rather use PR
20:13  * sonney2k removes stuff
20:13 <blackburn> but merged for the first time
20:13 <blackburn> sonney2k: wait wait
20:13 <blackburn> I can commit my changes
20:13 <blackburn> shall I?
20:22 <blackburn> I'll continue with openmp then
20:36 -!- gsomix [~gsomix@] has joined #shogun
20:45 <@sonney2k> blackburn, I think there is one possible use case for vec_index ... when one uses caching of vectors...
20:46 <@sonney2k> together with subsets
20:46 <@sonney2k> then one would need the orig index...
20:46 <blackburn> sonney2k: yes I saw something like that
20:46 <@sonney2k> but crap we should simplify that
20:46 <blackburn> sonney2k: I do not think we should focus on that right now
20:46 <@sonney2k> and remove caches etc and only when we really need them do better versions...
20:46 <blackburn> just stay as it is
20:56 <blackburn> sonney2k: how cool compilation is with ccache :D I found out what was wrong 1-2 weeks ago
20:57 <blackburn> 'sudo make' breaks everything
20:58 <blackburn> sonney2k: if I do 'sudo make' ccache is not used
<CIA-113> shogun: Soeren Sonnenburg master * red95843 / (26 files in 6 dirs): drop vec_index from SGSparseVector -
20:59 <blackburn> hmm so simple
20:59 <blackburn> gsomix: so how is it with directors?
21:03 <@sonney2k> blackburn, I hope I didn't miss a corner case though
21:04  * sonney2k continues with Labels
21:04 <blackburn> hmm my openmp dotfeatures seem to be b0rken
<shogun-buildbot> build #239 of nightly_all is complete: Failure [failed compile]  Build details are at
21:08 <blackburn> sonney2k: OctaveInterface.cpp:352: error: 'struct shogun::SGSparseVector<double>' has no member named 'vec_index'
21:14 <gsomix> blackburn, nohow.
<shogun-buildbot> build #946 of libshogun is complete: Success [build successful]  Build details are at
21:24 <shogun-buildbot> build #866 of cmdline_static is complete: Failure [failed test_1]  Build details are at  blamelist: sonne@debian.org
21:29 <shogun-buildbot> build #845 of r_static is complete: Failure [failed test_1]  Build details are at  blamelist: sonne@debian.org
21:34 <shogun-buildbot> build #755 of octave_static is complete: Failure [failed compile]  Build details are at  blamelist: sonne@debian.org
21:38 <shogun-buildbot> build #832 of python_static is complete: Failure [failed test_1]  Build details are at  blamelist: sonne@debian.org
<CIA-113> shogun: Soeren Sonnenburg master * r3db552b / examples/undocumented/libshogun/features_copy_subset_sparse_features.cpp : just use i for vector index in examples -
21:45 -!- shogun-buildbot [] has quit [Ping timeout: 244 seconds]
21:45 -!- shogun-buildbot [] has joined #shogun
<shogun-buildbot> build #833 of python_static is complete: Success [build successful]  Build details are at
<shogun-buildbot> build #867 of cmdline_static is complete: Success [build successful]  Build details are at
<shogun-buildbot> build #846 of r_static is complete: Success [build successful]  Build details are at
22:47 -!- wiking [] has joined #shogun
22:47 -!- wiking [] has quit [Changing host]
22:47 -!- wiking [~wiking@huwico/staff/wiking] has joined #shogun
22:48 <wiking> morning :D
22:48 <blackburn> ah right use
22:48 <blackburn> usa :D
23:53 <blackburn> wiking: what is conf?
<CIA-113> shogun: Sergey Lisitsyn master * r7e00d0e / (src/shogun/features/Labels.cpp src/shogun/features/Labels.h): Added get_binary_for_class method that presents multiclass labels as binary subtask -
--- Log closed Thu May 17 00:00:40 2012