--- Log opened Fri Apr 01 00:00:36 2011
<@sonney2k> serialhex, I am listening...  00:02
* sonney2k is just returning from the fsfe 10 years celebration  00:02
<blackburn> sonney2k: hi, I talked to you last week about a small modification of kNN (I'm doing it to get familiar with the 'internals' of shogun)  00:05
<blackburn> my idea is introducing a 'type' of classifier: if k=1 we don't need to sort, and classification is very, very fast  00:06
<blackburn> same thing with distance weighting of examples  00:06
<blackburn> what's your opinion on that?  00:06
<@sonney2k> sounds valid  00:06
<blackburn> should it be a different classifier, something like 'generalized kNN'?  00:07
<blackburn> *I know kNN isn't that useful, but it seems like a good way to learn about shogun development* :)  00:08
<serialhex> sonney2k: so i was planning on doing the ruby bindings for GSoC, and i had a different question then... i have a better question now :P  00:15
-!- blackburn1 [~qdrgsm@] has joined #shogun00:16
-!- blackburn [~qdrgsm@] has quit [Ping timeout: 240 seconds]00:16
<@sonney2k> blackburn1, I would introduce different train functions, one for k=1 and one for general k. But I would always include an array with example weighting  00:17
<blackburn1> but there is no difference when training  00:18
<blackburn1> sonney2k: btw, sorry, I got disconnected; was that your whole answer?  00:18
<@sonney2k> blackburn1, yes  00:18
<serialhex> sonney2k: i've been talking to a few people on #ruby-lang and they don't recommend SWIG bindings for various reasons... is there any specific reason i *should* or *would have to* use swig for that project?  00:18
<@sonney2k> blackburn1, sorry, applying  00:18
<@sonney2k> serialhex, well then don't use ruby with shogun :)  00:20
<@sonney2k> serialhex, but python :) swig bindings for python are pretty good...  00:20
<serialhex> sonney2k: no, what i was saying was, i *want* to do the project, i was just wondering if there's a specific reason you use swig?  00:21
<serialhex> sonney2k: i mean from what i can tell there are a few nice tools within ruby that make it easy to do all that... i'll have to explore a bit more, but if it's gonna be frowned upon in my application then i'll have to set myself up differently  00:23
<blackburn1> sonney2k: so, should I implement something like 'classify_with_NN(...)'?  00:23
<blackburn1> and regardless of this->k, it should classify for k=1, with no sorting, am I right?  00:24
<@sonney2k> serialhex, the reason is that we can easily (if swig is bugfree for $LANG) support many different languages  00:24
<@sonney2k> blackburn1, exactly  00:24
<blackburn1> sonney2k: and another one, 'classify_distance_weighted()', but there is a problem: how do we describe the distance weight function?  00:25
<serialhex> sonney2k: hmm... ok  00:25
<@sonney2k> classify_one_nn() / classify_k_nn() would make sense / but always have an array with example importance weights  00:25
<@sonney2k> serialhex, so all we have to do is write the C++ code - no more work needed to write interface code  00:26
<serialhex> sonney2k: that makes sense...  00:26
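[editor's note: the "write the C++ once, generate every binding" workflow sonney2k describes is driven by a SWIG interface file. This is a hypothetical minimal example for illustration only, not one of Shogun's actual .i files; the module and header names are invented.]

```swig
/* example.i - hypothetical minimal SWIG interface, NOT Shogun's actual one */
%module example
%{
#include "example.h"   /* the real C++ declarations */
%}
/* Expose everything in the header once; then
   swig -python example.i / swig -ruby example.i / swig -java example.i
   each generate the glue code for that language. */
%include "example.h"
```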
<blackburn1> eh, I don't quite understand; what kind of importance weights? as I see it, there is no difference between examples:     classes[train_lab[j]]++;  00:27
<@sonney2k> blackburn1, I thought you wanted to introduce that?  00:27
-!- aifargonos [~aifargono@] has quit [Ping timeout: 248 seconds]00:27
<blackburn1> when using weights by distance we should do e.g. classes[train_lab[j]] += inverse_distance_squared(dists[j]);  00:27
<@sonney2k> ahh I see  00:28
<@sonney2k> ^^ blackburn1  00:28
<blackburn1> not importance weights in the sense that example one is more important than example two, but the example nearest to the object being classified is more important than farther ones, etc  00:29
-!- blackburn1 is now known as blackburn00:29
<blackburn> it could even be a rank function, like classes[train_lab[j]] += pow(0.5,j);  00:31
<blackburn> how does that seem to you?  00:31
<blackburn> btw, sonney2k, what did you mean by '^^'? :)  00:33
<@sonney2k> blackburn, yes, when I said "I see" I meant I understand. Before that, I was thinking that you might have some weights over examples - like some kind of certainty score that an example can be trusted more.  00:35
<@sonney2k> sorry but I have to leave for now.  00:35
<@sonney2k> cu tomorrow  00:35
<blackburn> sorry for the little misunderstanding  00:35
<blackburn> see you, will implement that idea  00:36
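[editor's note: the voting variants discussed above (plain majority, blackburn's inverse squared distance, and the rank weight pow(0.5, j)) can be sketched in one function. This is an illustrative standalone sketch, not Shogun's CKNN code; the function name and signature are invented.]

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Illustrative distance-weighted kNN vote (NOT actual Shogun API).
// dists and train_lab are assumed already sorted by ascending distance.
// mode 0: plain majority vote
// mode 1: inverse squared distance weighting
// mode 2: rank weighting 0.5^j, as discussed above
int32_t knn_vote(const std::vector<double>& dists,
                 const std::vector<int32_t>& train_lab,
                 int32_t k, int32_t num_classes, int mode)
{
    std::vector<double> classes(num_classes, 0.0);
    for (int32_t j = 0; j < k; j++)
    {
        double w = 1.0;
        if (mode == 1)
            w = 1.0 / (dists[j] * dists[j] + 1e-12); // epsilon avoids div by zero
        else if (mode == 2)
            w = std::pow(0.5, j);
        classes[train_lab[j]] += w;
    }
    // winning class = index of the largest accumulated weight
    return int32_t(std::max_element(classes.begin(), classes.end())
                   - classes.begin());
}
```

Note how, for k=1, all three modes degenerate to the same answer, which is why a separate unsorted classify_one_nn() path costs nothing in accuracy.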
-!- bettyboo [] has joined #shogun00:44
-!- mode/#shogun [+o bettyboo] by ChanServ00:44
-!- blackburn [~qdrgsm@] has quit [Quit: Leaving.]00:45
<@knrrrd> hmmm, it's getting late  00:46
<@knrrrd> bye guys, i am also leaving.  00:48
-!- shelhamer [] has joined #shogun01:00
-!- sploving [~root@] has joined #shogun01:48
-!- shelhamer [] has quit [Quit: Get MacIrssi -]02:25
-!- sploving [~root@] has quit [Ping timeout: 240 seconds]03:23
-!- sploving [~root@] has joined #shogun03:27
-!- Ziyuan [~Ziyuan@] has quit []03:34
-!- sploving [~root@] has quit [Remote host closed the connection]03:39
-!- epps [~epps@unaffiliated/epps] has joined #shogun03:48
-!- sonney2k [~shogun@] has quit [Ping timeout: 240 seconds]04:10
-!- sonney2k [~shogun@] has joined #shogun04:11
-!- mode/#shogun [+o sonney2k] by ChanServ04:11
-!- epps [~epps@unaffiliated/epps] has quit [Ping timeout: 246 seconds]04:27
-!- Ryaether [] has joined #shogun04:31
-!- epps [~epps@unaffiliated/epps] has joined #shogun04:41
-!- sonney2k [~shogun@] has quit [Ping timeout: 240 seconds]04:48
-!- sonney2k [~shogun@] has joined #shogun04:49
-!- mode/#shogun [+o sonney2k] by ChanServ04:49
-!- dvevre [b49531e3@gateway/web/freenode/ip.] has quit [Ping timeout: 252 seconds]05:19
-!- shayan [~shayan@] has joined #shogun05:23
-!- shayan [~shayan@] has quit [Quit: Leaving]05:29
-!- shayan_ [~shayan@] has joined #shogun05:29
-!- shayan_ is now known as shayan05:30
-!- epps [~epps@unaffiliated/epps] has quit [Ping timeout: 250 seconds]05:45
-!- shayan [~shayan@] has quit [Quit: Leaving]06:44
-!- sploving [~root@] has joined #shogun06:50
-!- siddharth__ [~siddharth@] has joined #shogun07:19
-!- siddharth_ [~siddharth@] has quit [Ping timeout: 276 seconds]07:23
-!- siddharth__ [~siddharth@] has quit [Ping timeout: 276 seconds]07:31
-!- siddharth__ [~siddharth@] has joined #shogun07:35
-!- sploving [~root@] has quit [Quit: Leaving.]07:35
-!- siddharth_ [~siddharth@] has joined #shogun07:37
-!- sploving [~root@] has joined #shogun07:40
-!- siddharth__ [~siddharth@] has quit [Ping timeout: 260 seconds]07:41
-!- aifargonos [~aifargono@] has joined #shogun08:05
-!- seviyor [c1e20418@gateway/web/freenode/ip.] has quit [Ping timeout: 252 seconds]08:22
-!- aifargonos [~aifargono@] has quit [Ping timeout: 250 seconds]08:27
-!- aifargonos [~aifargono@] has joined #shogun08:44
-!- jabbok [56220ef5@gateway/web/freenode/ip.] has joined #shogun08:57
-!- sploving [~root@] has quit [Ping timeout: 240 seconds]08:58
-!- sploving [~root@] has joined #shogun09:07
-!- siddharth__ [~siddharth@] has joined #shogun09:30
-!- siddharth_ [~siddharth@] has quit [Ping timeout: 248 seconds]09:33
-!- siddharth__ [~siddharth@] has quit [Quit: Leaving]09:46
-!- sploving [~root@] has quit [Ping timeout: 240 seconds]09:46
-!- sploving [~root@] has joined #shogun09:48
-!- siddharth_k [~siddharth@] has joined #shogun09:53
-!- AdaHopper [] has joined #shogun09:57
<AdaHopper> good morning  09:57
-!- Ryaether [] has left #shogun []10:03
* sonney2k is also back...  10:09
* sonney2k just replaced his defective router  10:09
<@knrrrd> hi sonney2k  10:17
* sonney2k it feels soo good to be online :)  10:17
-!- Ziyuan [~Ziyuan@] has joined #shogun10:18
* Ziyuan hello  10:18
<@knrrrd> when did your router fail? today?  10:19
-!- aifargonos [~aifargono@] has quit [Ping timeout: 250 seconds]10:19
<Ziyuan> So we already have CSparseKernel and CSparseFeatures  10:21
<@sonney2k> knrrrd, 3 days ago... hansenet was quick to send a new one (took 1.5 days) ...  10:25
<@knrrrd> not bad.  10:25
<@sonney2k> Ziyuan, actually the sparse kernel is not really used - dot features are a more general framework that replaced them  10:26
<@sonney2k> knrrrd, in particular considering that it took 14hrs until I finally connected it...  10:26
-!- aifargonos [~aifargono@] has joined #shogun10:27
<@sonney2k> even b/g/n wlan  10:27
<@sonney2k> let's hope it proves stable...  10:27
<@knrrrd> is it a common brand, such as fritzbox?  10:28
<@sonney2k> no, something called alice wlan 1421 or so  10:32
<@sonney2k> the interface is really for dummies... difficult to use for me (could hardly find the port forwarding...)  10:32
<Ziyuan> Sonney, but aren't we supposed to implement the "Sparse Kernel Feature Analysis" this time?  10:33
<@sonney2k> what is interesting though is that it has pppoe-passthrough, so my linksys router behind it can do the real job ;-)  10:34
<@sonney2k> Ziyuan, the name is similar, yes, but it is something different - knrrrd will explain, I hope :)  10:34
<@knrrrd> Ziyuan: sparse kernel feature analysis is closely related to kpca  10:37
<Ziyuan> Yes, I've read the two papers  10:38
<@knrrrd> in kpca, you have a set of vectors and you learn a new basis in the kernel-induced feature space.  10:38
<@knrrrd> each vector of this basis is a linear combination of all training examples  10:38
<@knrrrd> so if you have n training examples and look at the first 3 basis vectors, you are dealing with 3 * n examples  10:39
<@knrrrd> this is different with kfa; here the basis vectors are sparse.  10:39
<@knrrrd> that is, a basis vector is only associated with ONE example of the training data  10:39
<@knrrrd> if we are looking at the first 3 basis vectors, we are only dealing with 3 examples and thus can be very fast when processing larger data sets  10:39
<Ziyuan> I thought that we were to implement it via those two classes.  10:40
<@knrrrd> so the implementation of kfa does not require any special kernel or feature representation  10:40
<@knrrrd> i think it would be best to put everything in a new class.  10:40
<@knrrrd> in principle it should work with any kernel object and feature vector (by virtue of kernels)  10:41
<@knrrrd> i guess sonney2k knows more about how feature vectors and kernel objects match in Shogun  10:41
<Ziyuan> Thanks for your detailed explanation :)  10:41
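[editor's note: knrrrd's sparsity argument can be made concrete. Projecting a new point onto one dense KPCA basis vector costs n kernel evaluations (one per training example); a sparse KFA basis vector is tied to a single training example, so m basis vectors cost only m evaluations. This sketch is illustrative only, not Shogun code; all names (kfa_project, idx, alpha) are invented.]

```cpp
#include <functional>
#include <vector>

// Hypothetical sketch: project x onto m sparse KFA basis vectors.
// Basis vector i is associated with ONE training example train[idx[i]]
// and a scalar coefficient alpha[i], so the projection needs only m
// kernel evaluations, instead of m*n as with dense KPCA basis vectors.
std::vector<double> kfa_project(
    const std::vector<double>& x,
    const std::vector<std::vector<double>>& train,  // training examples
    const std::vector<int>& idx,                    // one example index per basis vector
    const std::vector<double>& alpha,               // one coefficient per basis vector
    const std::function<double(const std::vector<double>&,
                               const std::vector<double>&)>& kernel)
{
    std::vector<double> proj(idx.size());
    for (size_t i = 0; i < idx.size(); i++)
        proj[i] = alpha[i] * kernel(x, train[idx[i]]);
    return proj;
}
```

With a string kernel in place of the vector kernel, the same m-evaluations argument is what makes KFA attractive for the DNA use case discussed later in the log.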
* knrrrd brb  10:42
<Ziyuan> Before asking questions, I'll read the source code of that part first.  10:47
<@knrrrd> sonney2k: are you still alive?  10:48
<@sonney2k> I am  10:48
<@sonney2k> just debugging something on  10:48
<@knrrrd> I noticed that all kernel-based methods derive from CKernelMachine  10:49
<@knrrrd> would it make sense to add KFA there? i am not sure. CKernelMachine rather looks like a generic SVM wrapper  10:49
<@sonney2k> knrrrd, I am not sure what KFA does  10:50
<@sonney2k> I mean, what is the input and what is the output?  10:50
<@knrrrd> sonney2k: it's a sparse version of kpca  10:50
<@knrrrd> output: a new basis (array of arrays of vectors) ;)  10:50
<@sonney2k> knrrrd, then I would implement it as a preprocessor; have a look at KernelPCACut.h  10:51
<@sonney2k> that is what does kpca in shogun  10:51
<@knrrrd> looks very good  10:52
<@knrrrd> KernelPCACut.h is still under construction ;)  10:53
<@knrrrd> most of the code is commented out  10:54
<@sonney2k> knrrrd, yeah, true, but the kpca itself is done already, just not the projection / applying things to features  10:55
<@knrrrd> yep, i see.  10:55
<@knrrrd> so CSimplePreProc is a good starting point for KFA  10:56
<@sonney2k> knrrrd, I think so - there is one catch though. You might want to support features other than just real-valued vectors later  10:57
<@knrrrd> yes. I am wondering what the type 'ST' means?  10:57
<@sonney2k> simple type, e.g. int32_t or float64_t  10:58
<@knrrrd> ah ok. then we'd better derive from CPreProc  10:59
<@knrrrd> maybe something like CKernelPreProc  10:59
<@sonney2k> knrrrd, so it will work with any kind of input - strings etc?  11:00
<@knrrrd> input: set of strings, output: n-dimensional representation  11:00
<@knrrrd> just like kpca  11:00
<@knrrrd> i think it will be straightforward. however, we can only support CFeatures* as input and need to get rid of apply_to_feature_vector  11:02
<@sonney2k> in this case preprocessors are not ideal - they have the (implicit) assumption that the feature type does not change. so I would recommend introducing KFAFeatures that get the kernel as an argument  11:02
<@sonney2k> ... and then produce the n-dimensional feature vectors later  11:02
<@knrrrd> ok. that would have been my next question: how to change the feature type during conversion  11:03
<@sonney2k> so these could be derived from CSimpleFeatures<float64_t> and all you have to do is store the feature_matrix  11:03
<@sonney2k> so if you manage doing kernel -> feature matrix you are done  11:03
<@knrrrd> there is still another catch. you can do kfa/kpca on one set of objects and then apply it to another  11:03
<@knrrrd> i think features cannot handle this case.  11:03
<@knrrrd> so eventually, it is a CKernelMachine ;)  11:04
<@sonney2k> knrrrd, could be - I guess you have to override the 'classify' functions inside kernel machine then ('classify' is misleading now - it should be 'apply' then)  11:05
<@knrrrd> similar to the clustering algorithms  11:07
<@knrrrd> okay. the more i think about it, KPCA and KFA can be seen as a regression with multiple outputs. so it's more a kernel machine  11:09
<Ziyuan> And does KFA need to be regularized?  11:12
-!- skydiver [c255a037@gateway/web/freenode/ip.] has joined #shogun11:13
<@knrrrd> it's as sensitive to outliers as kpca  11:13
<@knrrrd> dim reduction cannot really overfit. these methods are unsupervised  11:16
<Ziyuan> Yep, I've got it.  11:19
<@knrrrd> anyone else here interested in kfa?  11:19
<@knrrrd> i am just curious  11:19
<Ziyuan> me, me  11:19
<skydiver> hi Konrad  11:20
<skydiver> i'm interested  11:20
<@knrrrd> okay. any questions?  11:21
<skydiver> do you know whether kfa can be applied somehow to dna data?  11:24
<@knrrrd> skydiver: it can.  11:25
<@knrrrd> if you take a string kernel, you can compute a new basis for string data via kfa  11:25
-!- bettyboo [] has left #shogun []11:26
-!- bettyboo [] has joined #shogun11:27
-!- mode/#shogun [+o bettyboo] by ChanServ11:27
-!- bettyboo [] has left #shogun []11:30
-!- bettyboo [] has joined #shogun11:30
-!- mode/#shogun [+o bettyboo] by ChanServ11:30
<@knrrrd> something is going on here. ;)  11:30
<@knrrrd> so i am thinking about applying kfa to string data  11:31
<skydiver> how can the cutoff parameter (delta) be selected for AKFA?  11:31
<@knrrrd> it would allow us to have a set of strings as input and get a set of vectors as output  11:31
<Ziyuan> So where would the new classes be placed according to the current class hierarchy?  11:31
<@knrrrd> skydiver: i don't know. i think it depends on the data and we have to test  11:31
<@knrrrd> Ziyuan: i have been talking with sonney2k on this issue. The best would be to take CKernelMachine as a super-class, although KFA is not a classifier  11:32
<Ziyuan> Ah, I saw it.  11:33
<@knrrrd> i think it will not be too difficult. but also not extremely easy ;)  11:36
<skydiver> @knrrrd if we have, for example, a dataset with 200000 features it would be great to reduce the feature space size - i think AKFA can be applied  11:36
<@knrrrd> skydiver: exactly.  11:36
<@knrrrd> skydiver: especially if the data is not vectorial but discrete, such as trees and strings  11:37
<@knrrrd> skydiver: then we can compute vectors from the data by using the basis vectors of kfa  11:37
<skydiver> @knrrrd as i see it's O(l * n^2) where n is the size of the input patterns; does the size of the feature space affect the complexity?  11:38
<@knrrrd> skydiver: yes. the size of the feature space is implicitly captured in the kernel computation  11:39
-!- AdaHopper [] has left #shogun []11:39
<@knrrrd> but you don't need to care about this. Shogun has several kernel functions available which are very efficient  11:39
<skydiver> @knrrrd computation of a kernel function of two vectors from R^d costs O(d)  11:42
<@knrrrd> skydiver: depends  11:42
<skydiver> @knrrrd so i'd like to know whether it can affect the O(l * n^2) in AKFA by more than a constant  11:43
<@knrrrd> skydiver: i don't think so.  11:43
<skydiver> @knrrrd can you give an example of tree or string input data with a lot of features? i haven't used string kernels and want to get the idea  11:45
<@knrrrd> skydiver: basically you don't define the features explicitly but implicitly, by using a kernel  11:46
<@knrrrd> skydiver: just have a look at papers on string kernels and tree kernels  11:46
<skydiver> now i see  11:46
<@knrrrd> skydiver: often these kernels operate in vector spaces with millions or infinitely many dimensions  11:47
<@knrrrd> skydiver: but they do this efficiently and not in O(d), of course ;)  11:47
<@bettyboo> :>  11:47
-!- ziyuang [~Ziyuan@] has joined #shogun11:48
<@knrrrd> kpca has already been used for such applications  11:49
<@knrrrd> however, the basis vectors of kpca are dense.  11:50
<@knrrrd> if you have a training set with millions of instances, each basis vector is a linear combination of these instances  11:50
<@knrrrd> -> bad and slow  11:50
<@knrrrd> for kfa, we only have one training instance per basis vector  11:50
-!- Ziyuan [~Ziyuan@] has quit [Ping timeout: 246 seconds]11:51
<@knrrrd> that is, we can compute an n-dimensional vector from a set of objects by comparing each object only to the n instances learned by kfa  11:51
<@knrrrd> anything else related to kfa?  11:53
<@knrrrd> i need to leave in about 10 minutes  11:53
<@knrrrd> btw. gsoc of shogun has been tweeted by PASCAL. Cool ;)  11:54
-!- ziyuang [~Ziyuan@] has quit []11:56
-!- Ziyuan [~Ziyuan@] has joined #shogun11:57
<skydiver> @knrrrd are there any kfa methods other than kpca, skfa and akfa?  11:57
<@knrrrd> skydiver: i guess there are many variants of kpca and sparse kpca with different scopes and applications  11:58
<@knrrrd> skydiver: most, however, try to improve kpca in terms of robustness and not run-time performance  11:58
<@knrrrd> skydiver: you are andrew, right?  11:59
<@knrrrd> Ziyuan: you are ziyuan. that wasn't too hard.  12:00
<skydiver> @knrrrd i'm very interested in using kfa for my current work with dna data - thanks for the pointers  12:01
<Ziyuan> Yes, I am Ziyuan Lin~  12:01
<Ziyuan> There is "Kernel Correlation Feature Analysis"  12:01
<@knrrrd> ah ok. i guess there is a lot of "Kernel * * Analysis"  12:01
<@knrrrd> kica, kcca and so on  12:02
<@knrrrd> okay. nice talking to you.  12:04
-!- skydiver [c255a037@gateway/web/freenode/ip.] has quit [Quit: Page closed]12:05
<Ziyuan> Why is "ZeroMeanCenterKernelNormalizer" implemented without using "center_matrix" from Mathematics.h?  12:10
-!- aifargonos [~aifargono@] has quit [Ping timeout: 276 seconds]12:32
<CIA-30> shogun: Christian Widmer boost_serialization * rae4a959 / (4 files in 3 dirs): -
<CIA-30> shogun: Jonas Behr galaxy * ra73a80f / (6 files in 3 dirs): changes of structure interface for usage in galaxy framework -
<CIA-30> shogun: Jonas Behr structure * r400aa70 / (3 files in 3 dirs): Debug code removed; possibly large array not on stack any more -
<CIA-30> shogun: Christian Widmer boost_serialization * rfb8bd16 / (2 files): fixed problem with alphabet save_construct_data -
<CIA-30> shogun: Soeren Sonnenburg master * rcbef0fb / (3 files in 2 dirs):  12:51
<CIA-30> shogun: Implement snappy compression support (
<CIA-30> shogun: This code is still experimental and pending benchmarks. (+221 more commits...) -
-!- ziyuang [~Ziyuan@] has joined #shogun12:53
-!- Ziyuan [~Ziyuan@] has quit [Ping timeout: 260 seconds]12:53
-!- ziyuang [~Ziyuan@] has quit [Client Quit]12:57
-!- Ziyuan [~Ziyuan@] has joined #shogun12:58
<CIA-30> shogun: Soeren Sonnenburg master * rbbf6afa / src/configure :  13:04
<CIA-30> shogun: add flags to support lua,ruby and java via  13:04
<CIA-30> shogun: ./configure --interfaces=libshogun,ruby_modular,lua_modular,java_modular  13:04
<CIA-30> shogun: basic autodetection should work for java and ruby paths. Autodetection  13:04
<CIA-30> shogun: for lua should be perfect. -
<CIA-30> shogun: Soeren Sonnenburg master * rad96ba3 / (src/ChangeLog src/.authors): remove d.l.s email address from changelog -
<CIA-30> shogun: Soeren Sonnenburg master * rcbef0fb / (3 files in 2 dirs):  13:04
<CIA-30> shogun: Implement snappy compression support (
<CIA-30> shogun: This code is still experimental and pending benchmarks. -
<@sonney2k> Ziyuan, check if it is the same and if it is - submit a patch :)  13:05
<@sonney2k> Ziyuan, the reason is probably that it didn't exist in lib/Mathematics.h at that time  13:06
* sonney2k has just recreated the shogun repository for the sake of keeping peace on earth  13:07
<@knrrrd> bettyboo: What's up? Anything new  13:10
<@bettyboo> knrrrd: What's up? Anything new  13:10
<@knrrrd> bettyboo: ?  13:10
<@bettyboo> knrrrd: i am already feeling like a scientist here  13:10
-!- ziyuang [~Ziyuan@] has joined #shogun13:40
-!- Ziyuan [~Ziyuan@] has quit [Ping timeout: 276 seconds]13:42
-!- siddharth_k [~siddharth@] has quit [Ping timeout: 240 seconds]13:43
-!- aifargonos [~aifargono@] has joined #shogun13:45
-!- aifargonos [~aifargono@] has quit [Ping timeout: 260 seconds]13:52
<@knrrrd> man, tcl is one annoying programming language  13:56
<@knrrrd> java s.length(), python len(s), tcl [string length s]  13:57
-!- siddharth_k [~siddharth@] has joined #shogun13:58
-!- ziyuang [~Ziyuan@] has quit []13:59
<@knrrrd> what bloat: [string match pattern txt]  13:59
-!- Ziyuan [~Ziyuan@] has joined #shogun13:59
<@knrrrd> !seen sonney2k  14:00
<@bettyboo> knrrrd, sonney2k is right here!  14:00
<@knrrrd> but idle.  14:00
<@sonney2k> knrrrd, my whole screen is flashing ;-)  14:03
<@knrrrd> hehe, that's good  14:04
-!- aifargonos [~aifargono@] has joined #shogun14:13
-!- bettyboo [] has left #shogun []14:19
-!- bettyboo [] has joined #shogun14:19
-!- mode/#shogun [+o bettyboo] by ChanServ14:20
-!- sploving [~root@] has quit [Ping timeout: 240 seconds]14:35
<@knrrrd> it took some time, but betty is fine now.  14:44
<@bettyboo> knrrrd: 3 days ago... hansenet was quick to send a new one (took 1.5 days) ...  14:44
<@knrrrd> i know  14:44
<@knrrrd> bettyboo, does shogun have a python interface?  14:50
<@bettyboo> knrrrd: does shogun have a python interface?  14:50
<@knrrrd> i'm going nuts  14:50
-!- Tanmoy [75d35896@gateway/web/freenode/ip.] has joined #shogun14:51
-!- epps [~epps@] has joined #shogun14:51
-!- epps [~epps@] has quit [Changing host]14:51
-!- epps [~epps@unaffiliated/epps] has joined #shogun14:51
-!- bettyboo [] has quit [Quit: time to recycle]14:54
-!- bettyboo [] has joined #shogun14:57
-!- mode/#shogun [+o bettyboo] by ChanServ14:57
<Tanmoy> @sonney2k is there no method described for the Covariance matrix?  15:08
<@sonney2k> Tanmoy, well, PCA obviously needs it, so the code is probably hidden in there (and should probably be moved into Mathematics)  15:09
<Tanmoy> Kernel PCA too  15:10
<Tanmoy> @sonney2k there are no methods for regularisation of ill-posed problems  15:10
<@sonney2k> that's true  15:10
<Tanmoy> that way computations on the Covariance matrix could be reduced  15:10
<Tanmoy> even if it's a dot product  15:11
<Tanmoy> @sonney2k so what preprocessing method is actually there apart from a Kernel Matrix? because i would need to recompute that anyway  15:16
-!- epps [~epps@unaffiliated/epps] has quit [Quit: Leaving]15:20
<@knrrrd> bettyboo: do you know any further code for pca and covariance?  15:25
<@bettyboo> knrrrd: yeah. a lot of work and not many results. so far  15:25
-!- epps [~epps@] has joined #shogun15:27
-!- epps [~epps@] has quit [Changing host]15:27
-!- epps [~epps@unaffiliated/epps] has joined #shogun15:27
-!- aifargonos [~aifargono@] has quit [Ping timeout: 264 seconds]15:28
-!- aifargonos [~aifargono@] has joined #shogun15:38
-!- aifargonos [~aifargono@] has quit [Ping timeout: 240 seconds]15:45
-!- aifargonos [~aifargono@] has joined #shogun15:46
-!- skydiver [4deac315@gateway/web/freenode/ip.] has joined #shogun15:53
-!- bettyboo [] has quit [Remote host closed the connection]16:00
-!- bettyboo [] has joined #shogun16:00
-!- mode/#shogun [+o bettyboo] by ChanServ16:00
-!- epps [~epps@unaffiliated/epps] has quit [Read error: Connection reset by peer]16:17
-!- epps [~epps@unaffiliated/epps] has joined #shogun16:17
<@knrrrd> pretty cool host: @unaffiliated/epps  16:18
<epps> thanks :)  16:19
<@knrrrd> makes you look very unaffiliated  16:19
-!- seviyor [c1e20418@gateway/web/freenode/ip.] has joined #shogun16:21
<@sonney2k> Tanmoy, I don't understand your question...  16:22
-!- rubic [75d35896@gateway/web/freenode/ip.] has joined #shogun16:28
-!- rubic [75d35896@gateway/web/freenode/ip.] has quit [Ping timeout: 252 seconds]17:01
<siddharth_k> sonney2k, do we have to compile it again?  17:29
<@sonney2k> knrrrd, I've updated the website  and ... comments welcome  17:30
<@sonney2k> siddharth_k, you have to git clone and compile again, yes.  17:30
<@sonney2k> sorry about that  17:30
<@knrrrd> okay, i'll take a look.  17:35
<@knrrrd> betty: can you also go over the page and check? thx.  17:35
<@bettyboo> knrrrd: but you are interested in shogun?  17:35
<@knrrrd> bettyboo: yes, but you could help here. seriously  17:36
<@bettyboo> knrrrd: seriously. shogun is great  17:36
<@knrrrd> sonney2k: the webpage looks good, though it is very wide now  17:42
<@knrrrd> i am not sure if this was the case before you added gsoc  17:42
<@sonney2k> knrrrd, I don't recall either  17:43
<@knrrrd> we could ask betty, maybe she knows.  17:44
<@bettyboo> knrrrd: that way computations on the Covariance matrix could be reduced  17:44
<@bettyboo> knrrrd, ;D  17:44
@knrrrd           |17:46
@knrrrd  L   /---------17:46
@knrrrd LOL===       []\17:46
@knrrrd  L    \         \17:46
@knrrrd        \_________\17:46
@knrrrd          |     |17:46
<@knrrrd> interesting post on google's new feature to translate pig latin:
-!- blackburn [~qdrgsm@] has joined #shogun18:05
-!- seviyor [c1e20418@gateway/web/freenode/ip.] has quit [Quit: Page closed]18:34
<Tanmoy> @sonney2k what i mean is: what preprocessing methods are there for Kernel PCA?  19:14
-!- epps [~epps@unaffiliated/epps] has quit [Ping timeout: 240 seconds]19:51
-!- epps [~epps@unaffiliated/epps] has joined #shogun19:54
<@knrrrd> Almost ready for the weekend  20:19
-!- dvevre [b49531e3@gateway/web/freenode/ip.] has joined #shogun20:34
-!- skydiver [4deac315@gateway/web/freenode/ip.] has quit [Ping timeout: 252 seconds]20:43
-!- Tanmoy [75d35896@gateway/web/freenode/ip.] has quit [Quit: Page closed]20:49
-!- seviyor [bc19cab5@gateway/web/freenode/ip.] has joined #shogun21:43
-!- alesis-novik [~alesis@] has joined #shogun23:02
-!- seviyor [bc19cab5@gateway/web/freenode/ip.] has quit [Quit: Page closed]23:13
-!- epps [~epps@unaffiliated/epps] has quit [Ping timeout: 260 seconds]23:28
<alesis-novik> Have you seen the old school youtube or 3d xkcd?  23:45
-!- skydiver [c255a037@gateway/web/freenode/ip.] has joined #shogun23:53
-!- epps [~epps@unaffiliated/epps] has joined #shogun23:56
--- Log closed Sat Apr 02 00:00:36 2011