--- Log opened Wed Nov 13 00:00:22 2013
-!- shogun-notifier- [] has quit [Quit: transmission timeout]00:31
-!- iglesiasg [] has quit [Quit: Leaving]00:53
-!- hushell [] has quit [Ping timeout: 264 seconds]03:19
shogun-buildbot_build #616 of nightly_default is complete: Failure [failed notebooks]  Build details are at
-!- hushell [] has joined #shogun04:39
-!- hushell [] has quit [Ping timeout: 240 seconds]07:36
-!- new_lido [~walid@] has joined #shogun07:56
-!- new_lido [~walid@] has quit [Ping timeout: 244 seconds]08:00
-!- hushell [] has joined #shogun08:01
-!- new_lido [~walid@] has joined #shogun08:11
-!- sonne|osx [] has joined #shogun08:19
-!- benibadman [~benibadma@] has joined #shogun08:27
-!- sonne|osx [] has quit [Quit: sonne|osx]08:42
-!- benibadm_ [~benibadma@] has joined #shogun08:49
-!- benibadman [~benibadma@] has quit [Ping timeout: 246 seconds]08:52
-!- iglesiasg [] has joined #shogun09:27
-!- mode/#shogun [+o iglesiasg] by ChanServ09:27
-!- iglesiasg [] has quit [Quit: Leaving]09:55
-!- sonne|osx [] has joined #shogun10:03
-!- lisitsyn1 [~lisitsyn@] has quit [Ping timeout: 245 seconds]10:38
sonne|osxguys have you seen? the next europython is in berlin :)10:40
sonne|osxmaybe we should just have our shogun workshop before / after that date :)10:40
-!- sonne|osx [] has quit [Quit: sonne|osx]10:44
-!- hushell [] has quit [Ping timeout: 246 seconds]10:59
-!- iglesiasg [~iglesias@2001:6b0:1:1da0:9895:4f04:e0e5:ccbc] has joined #shogun12:13
-!- mode/#shogun [+o iglesiasg] by ChanServ12:13
@iglesiasgBTW, MKL experts12:26
@iglesiasgdid you see this mail on the mailing lists regarding MKL regression?12:26
@wikingsonne|work: ping?13:10
@wikingbtw why do we need 2 pass reading for processing a CSV file?13:16
-!- sonne|osx [] has joined #shogun13:29
@wikingsonne|work: ping13:31
sonne|osxwiking: pong13:32
@wikingsonne|osx: 1) why do we need 2 pass reading for processing a CSV file? 2) so gunnar doesn't need any help from our side, apart from signatures?13:33
sonne|osxwiking: well it requires much less memory13:34
sonne|osx50% actually13:35
sonne|osxwiking: do you know of any python based github markdown format renderer?13:39
@wikingsonne|osx: so the thing is that libarchive has a problem with seeking :P13:42
@wikingsonne|osx: mmm none pops up in my head13:42
@wikingi guess u googled around already13:42
sonne|osxwiking: but couldn't you then open /close and again open/close ?13:43
@wikingsonne|osx: well i hoped that i dont have to13:43
@wikingsonne|osx: but seems that's the only option13:44
sonne|osxwiking: rationale really is - if you load a small .csv it doesn't matter that you read it twice13:44
@wikingsonne|osx: yeah but that's the thing13:44
sonne|osxwiking: if you read a big one - you wouldn't be able to load it in memory when you read it just once13:44
@wikingi guess if u want to load a gz-ed or bzip2 csv13:44
@wikingthen u rather have a big feature matrix :P13:45
sonne|osxof course you can use hdf5/protobuf based files that have known number of vectors etc13:45
@wikingsonne|osx: reallocing memory ?13:45
sonne|osxwiking: well yeah but not 100% reliable13:46
@wikingsonne|osx: i mean libarchive is actually designed with big archives in mind13:46
@wikingi.e. etc13:46
@wikingwhich of course is going to be super tricky if we want to support that13:46
@wikinge.g. we dont want to :P13:46
@wikingso just one file with various compression would be ok to support i guess13:47
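The two-pass rationale sonne|osx gives above can be sketched as follows (editor's sketch, not Shogun's actual CSV reader; the helper name is hypothetical). Pass one only counts, so the matrix is allocated exactly once:

```cpp
#include <istream>
#include <sstream>
#include <string>
#include <vector>

// Two-pass CSV reading: pass 1 counts rows/columns so the feature
// matrix can be allocated exactly once; pass 2 fills it in place.
// A one-pass reader would have to grow the buffer as it goes, roughly
// doubling peak memory for large files (the "50%" figure above).
std::vector<double> read_csv_two_pass(std::istream& in,
                                      size_t& rows, size_t& cols)
{
    rows = 0;
    cols = 0;
    std::string line;
    while (std::getline(in, line))   // pass 1: count only
    {
        if (line.empty())
            continue;
        ++rows;
        size_t c = 1;
        for (char ch : line)
            if (ch == ',')
                ++c;
        cols = c;                    // assumes rectangular input
    }
    in.clear();
    in.seekg(0);                     // the rewind that a non-seekable
                                     // libarchive stream cannot do
    std::vector<double> mat;
    mat.reserve(rows * cols);
    while (std::getline(in, line))   // pass 2: fill
    {
        if (line.empty())
            continue;
        std::stringstream ss(line);
        std::string cell;
        while (std::getline(ss, cell, ','))
            mat.push_back(std::stod(cell));
    }
    return mat;
}
```

The `seekg(0)` between passes is exactly the operation that breaks on a compressed libarchive stream, which is what the rest of the discussion is about.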
-!- sonne|osx [] has quit [Quit: sonne|osx]14:02
-!- sonne|osx [] has joined #shogun14:03
sonne|osxwiking: yeah...14:03
@wikingsonne|osx: i want to change from FILE* to some other type of stream14:06
sonne|osxwiking: to what?14:06
sonne|osxI mean from FILE you can get the fd14:06
sonne|osxbut hmmh14:07
@wikingthe problem is that we are currently passing around FILE*14:07
@wikingand obviously if we want to use libarchive14:07
@wikingwe have to use for that something else14:08
@wikingas it doesn't provide FILE* stream14:08
sonne|osxwiking: yeah sure but what do you need to use?14:08
@wikingwell some kind of an abstract stream14:09
@wikingwhere i can both wrap a single file fd (or FILE*) and a more complex libarchive stream14:09
sonne|osxwiking: sure do it then14:10
@wikingas for example now i'm not able to do a close/open operation on a libarchive handle... as i only have a handle14:10
@wikingthat doesn't say anything about the filename or fd it was used to open that archive14:10
@wikinghence i do not even know what i need to reopen14:10
@wikingsonne|osx: ok... i mean there's one very straightforward way to do this14:10
@wikingif we can use std:: PP14:11
sonne|osxwiking: could you use fileno(FILE*)14:18
sonne|osxwiking: and then fdreopen?14:18
sonne|osxha fdreopen doesn't exist14:19
@wikingsonne|osx: i do not have a way to get a FILE* or fd from libarchive that is actually streaming the uncompressed content14:20
sonne|osxI see14:21
@wikingthere's only archive_read_data_into_fd14:23
@wikingbut that is putting all the content of an archive into the given fd14:24
-!- taylan [d5f4a885@gateway/web/freenode/ip.] has joined #shogun14:24
@wikingwhich basically sucks in the sense that it writes everything into a file14:24
taylanHi Everyone..14:25
taylanI have a question about the C2 parameter..14:25
@wikingof course this way it's much easier as that FD can be used to generate the FILE*14:25
@wikingand after that we just do the usual business14:25
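To illustrate the fileno()/fdreopen() dead end above: for a plain seekable FILE* a second pass needs no reopen at all, rewind() is enough; the catch is that libarchive exposes no FILE* or fd for the decompressed stream. A minimal sketch (hypothetical helper name):

```cpp
#include <cstdio>

// fileno(FILE*) does hand back the descriptor, but there is no
// fdreopen(); for a plain seekable file the "second pass" needs no
// reopen at all -- rewind() suffices.  The trick fails for libarchive
// because the decompressed stream has no FILE*/fd to rewind.
int count_lines_twice(FILE* fp)
{
    int first = 0;
    int second = 0;
    int ch;
    while ((ch = std::fgetc(fp)) != EOF)
        if (ch == '\n')
            ++first;
    std::rewind(fp);                 // restart the same FILE*
    while ((ch = std::fgetc(fp)) != EOF)
        if (ch == '\n')
            ++second;
    return first == second ? first : -1;
}
```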
taylanapparently this parameter controls the weights of the classes, but I don't understand why there is an indirection for it, i.e. dividing by C1 before setting the weights..14:27
taylancan someone point me to any clues?14:27
@iglesiasgtaylan, what class?14:27
taylanso ok, I am looking at the libsvm class now, but I think this is the same for other implementations as well14:28
taylansth like this: float64_t weights[2]={1.0,get_C2()/get_C1()};14:28
taylanso C2 indirectly sets the weight of classes..14:28
@iglesiasgtaylan, are C1 and C2 weights for positive and negative examples?14:29
taylanif I am working with an imbalanced dataset, I would like to set this separately..But I feel like I am missing sth as it is the same for all implementations14:29
@iglesiasgtaylan, well, I see that set_C accepts two arguments14:29
taylanI thought C is the cost parameter for misclassification..14:31
taylanand weights represent the imbalance of the dataset14:32
taylanso ideally I would keep the weights fixed at the ratio of positive and negative samples, and do a grid search on C14:32
taylanbut I cannot do a grid search on C without actually changing the weights14:33
@iglesiasgC is the typical regularization14:33
@iglesiasgand in this case you get the option to set different regularization for each of the classes14:33
@iglesiasgbecause that is good when you have skewed class distributions14:34
@iglesiasgI don't quite see where the problem comes14:34
taylanI think there is no problem, it's probably I misunderstand the concept..14:35
taylanso setting the weight in svm is actually shifting the regularization parameter internally, is that correct?14:36
-!- sonne|osx [] has quit [Quit: sonne|osx]14:38
@iglesiasgtaylan, mmm I am not sure if by 'weight' you mean the regularization14:39
@iglesiasgI believe the alphas are sometimes called weights14:39
-!- sonne|osx [] has joined #shogun14:40
taylanok, I see in libsvm code that the weights are actually the regularization for each class..14:41
taylanThis means that I don't need to do any grid search for the c2 parameter, just on C1 will be enough14:42
taylanI have some other questions, I'd appreciate if you can help14:42
@iglesiasgtaylan, it can be that libsvm allows you to set regularization (your "weights") for each example14:43
@iglesiasgit is just more general14:43
@iglesiasgbut you probably don't want to do model selection allowing every weight to be different14:44
taylanok , I think it's clear now..14:45
taylanfrom libsvm docs:  -wi weight : set the parameter C of class i to weight*C, for C-SVC (default 1)14:45
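The convention taylan quotes can be made concrete with a small arithmetic sketch (illustrative names, not Shogun's API): the effective per-class cost is weight_i * C, so scaling C1 while keeping the C2/C1 ratio fixed scales both classes together, which is why a grid search over C1 alone suffices.

```cpp
// The libsvm convention above: effective per-class cost is w_i * C.
// Shogun stores C1/C2 directly, and weights[2] = {1.0, C2/C1} converts
// back (C plays the role of C1).  Scaling C1 with the C2/C1 ratio held
// fixed scales both classes together, so a 1-D grid search over C1 is
// enough.  Names below are illustrative, not Shogun's API.
struct PerClassC
{
    double C1;
    double C2;
};

// One grid-search step: scale both costs, preserving the class ratio.
PerClassC scale_C(PerClassC c, double factor)
{
    return {c.C1 * factor, c.C2 * factor};  // C2/C1 unchanged
}
```

In Shogun itself the two values are set together via set_C, which (as noted above) accepts two arguments.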
lisitsynoh well14:45
lisitsynit happens all the time and we still didn't manage to fix it14:45
lisitsyntaylan: it is a known issue but we don't have a good solution to search just for C and not C1/C214:46
lisitsyniglesiasg: hey there14:46
@iglesiasghello hello14:47
lisitsyntaylan: it is just about too much code that uses this C1 C2 :) I am sorry you have to do double work here14:47
taylanso my other question is mainly about training huge datasets. I run out of memory with > 20K samples on 7 features..any tips on this?14:47
lisitsynwith your machine I mean14:47
lisitsyntaylan: just 20K samples and it fails?14:47
taylanlisitsyn, work is no problem..I just want to understand..14:47
lisitsynI find it strange14:47
taylanyep, on that range14:47
taylanlet me give it a try to be sure..14:47
lisitsyntaylan: how much memory do you have?14:47
taylanI mean I can up the memory of course, but I just have the impression that I am doing sth wrong..14:48
lisitsyntaylan: no no should be totally enough14:48
taylanSystemError: Out of memory error, tried to allocate 1599840004 bytes using malloc.14:49
taylanThis is with 20K samples14:49
taylanIt actually fails in the first iteration14:49
lisitsyn1.6 gb14:50
sonne|osxyeah should work14:50
taylanbefore running it I have 2.7GB free..14:50
lisitsyntaylan: can you share the code doing that?14:50
taylansorry, 2.7GB full14:50
taylanah, ok, I think it tries to allocate this 4 times, since I set the parallel to 4..14:52
taylansth like this:14:53
taylan self.gridSearch = GridSearchModelSelection(self.crossValidation, self.paramTree)
       p = Parallel()
       p.set_num_threads(4)
       self.gridSearch.set_global_parallel(p)
       best_combination = self.gridSearch.select_model(True)14:53
taylansorry, couldn't paste well enough -))14:53
taylanok, let's say that my dataset is actually a lot bigger.. is there any way to use memory mapped files for feature storage?14:54
dsockwelli had a weird segfault with gridsearch a few months ago14:54
lisitsynwell your dataset is pretty small right now14:54
dsockwellhaven't tried the new release or bothered debugging too much because it was a toy project14:54
sonne|osxwe had an issue with a  memory leak w/ modular interfaces14:54
lisitsyntaylan: are you doing binary classification?14:54
taylanI see, so this might point to a memory leak somewhere..14:54
sonne|osxbut no idea why it would alloc 1.5GB14:55
dsockwellbut you may benefit from running valgrind on your project14:55
taylanyes, binary classification..14:55
lisitsyntaylan: I see absolutely no reason to allocate that much memory14:55
taylanI will give it a try, everything points me to using C++ now, so I will port the code to C++. I have been using python, it might have an effect as well14:55
dsockwelli had to recompile libshogun targeting i486 to get valgrind to run14:55
lisitsyntaylan: no, it is not a matter of python14:55
lisitsynit is about internal things14:56
lisitsyndsockwell: why?14:56
taylanok, I will debug this, and come back if I find sth. Just wanted to make sure I'm not doing anything stupid..14:56
lisitsyndid you get illegal instruction?14:56
dsockwelli had compiled it targeting an i7 and valgrind choked on some of the instructions14:56
dsockwellthey're just behind14:56
lisitsyndsockwell: ah you should have updated valgrind ;)14:56
dsockwelli did14:56
-!- krispin [0e8bb973@gateway/web/freenode/ip.] has joined #shogun14:56
lisitsynvalgrind had an issue with i5 and i714:56
lisitsynnewer one doesn't have it14:57
dsockwellsince when is it gone?14:57
dsockwelli might have been doing that before the fix came14:57
lisitsyndsockwell: ah okay14:57
taylanok guys, thanks a lot for your feedback..I have one last question though.14:57
lisitsyndsockwell: then just fyi now it is ok14:57
taylanis there any extension in shogun that can do mpi like parallelization?14:57
sonne|osxwith valgrind you should always just use debug mode and no optimization flags14:57
taylanso if I want to run my training on multiple machines, any clues?14:57
lisitsyntaylan: so it would be helpful if you track down where allocation happens14:58
dsockwellanyway taylan valgrind should see through python into libshogun which is C++14:58
lisitsyntaylan: Heiko has some environment for that iirc14:58
taylanlisitsyn: i will investigate it..14:58
lisitsyntaylan: other way round you can send me some snippet14:58
lisitsynI can take a look14:58
lisitsynbut later tonight14:58
@iglesiasgtaylan, you may want to check out GraphLab for MPI stuff14:58
taylanlisitsyn: It's ok, I believe I can learn a couple of things debugging..some sort of masochism ;)14:59
taylaniglesiasg: thanks, i will have a look at it now..14:59
dsockwellyes if it's your first time using valgrind, put a cloth down to catch the blood15:00
lisitsynmasochist and sadist were put into a jail, masochist cries - hurt me and sadist laughs - I won't I won't15:03
lisitsyntaylan: well if there are leaks - they shouldn't be here, if there are no leaks but it still wants all of your memory I'd suspect a bug here15:04
-!- krispin [0e8bb973@gateway/web/freenode/ip.] has quit [Ping timeout: 250 seconds]15:05
-!- lisitsyn [] has quit [Quit: Leaving.]15:10
sonne|osxor a user error15:15
dsockwellmy personal guess is bad reference counting15:21
-!- FSCV [~FSCV@] has joined #shogun15:24
-!- shogun-notifier- [] has joined #shogun15:29
shogun-notifier-shogun-web: Soeren Sonnenburg :master * 1056a50 / / (8 files):
shogun-notifier-shogun-web: create feature matrix under /page/features15:29
@sonney2kiglesiasg, wiking,
@sonney2k(shift reload the page if it looks weird)15:33
@sonney2kwiking, *lol*15:35
@sonney2kwiking, the link you sent it is really hilarious :))15:37
@wikingsonney2k: how do we force regen of the matrix?15:39
@wikingi see i guess feature_matrix.csv update would solve that15:40
@wikingthe feature groups should be *strong*15:41
@iglesiasgfeature matrix back again then!15:41
@wikingi.e. *General Features* *Supported Operating Systems* etc15:41
@sonney2kwiking, well easy just some css15:42
@wikingsonney2k: go ahead ;)15:42
@sonney2kwiking, the rotated text was not so easy..15:45
@wikingsonney2k: r u feeling comfortable15:45
@wikingwith std::iostream?15:45
@wikingor not so much15:45
@wikinginstead of FILE*15:46
@sonney2kgerman though15:46
@sonney2kgerman radio - moderator freak out15:47
@sonney2kiglesiasg, well it is not really back yet - still hidden. we should get rid of the ? first15:53
@sonney2kiglesiasg, and then proper integration...15:54
@sonney2kwiking, you just run util/ - that will fetch the latest .csv15:54
-!- sonne|osx [] has quit [Quit: sonne|osx]15:56
-!- sonne|osx [] has joined #shogun15:57
@wikingsonney2k: got it15:58
@wikingsonney2k: iostream?15:58
sonne|osxwiking: we are talking the c library libarchive?16:06
sonne|osxiglesiasg: any news on the markdown rendered stuff?16:07
@iglesiasgsonne|osx, what I told you yesterday, I am stuck with one thing16:07
sonne|osxiglesiasg: didn't see it with what?16:07
@iglesiasgso I pretty much put the javascript code in a new template16:08
@iglesiasgbut that is not showing anything16:08
@iglesiasgactually, even if I remove the line that includes the markdown.js, it is still the same thing going on16:08
@iglesiasgso I was trying to debug that using the chrome dev tools16:09
@iglesiasgbut don't really get what is going on16:09
@iglesiasgsonne|osx, do you see anything missing?16:09
@wikingsonne|osx: yeps16:10
@wikingsonne|osx: btw have u tried this:
sonne|osxiglesiasg: I guess CORS issues - what does the console say?16:13
@iglesiasgno idea what CORS is16:13
sonne|osxwiking: not sure if this is sufficient but I would actually prefer a server side solution16:14
@iglesiasgyep, I was there16:14
sonne|osxiglesiasg: any errors on the console?16:14
@iglesiasgsonne|osx, Uncaught SyntaxError: Unexpected token ILLEGAL16:15
sonne|osxiglesiasg: or you render server side - maybe that is sufficient for us16:17
sonne|osxiglesiasg: so no .js but just the markdown python -pkg16:18
@wikingsonne|osx: there16:18
@iglesiasggot to run now16:18
@iglesiasgwill ask you later again16:18
@wikingsonne|osx: full explanation about what we need16:18
sonne|osxwiking: sounds good - so you could wrap simple files with that (filename based and so could open / reopen)16:20
sonne|osxwiking: so we have some abstract class that defines these operations16:20
@wikingsonne|osx: purely derived from SGObject?16:21
sonne|osxmaybe we should rename CFile to something else and then use CFile for that16:21
sonne|osxwiking: yes16:21
sonne|osxmaybe CFile -> CDataFile16:21
sonne|osxthen define a new CFile with open/read/write/rewind16:22
@wikingsonne|osx: or just have CFileHandle16:22
@wikingor CFileStream16:22
sonne|osxthat could be a normal FILE underneath or sth16:22
sonne|osxwiking: yes or that16:22
sonne|osxsounds good to me16:22
@wikingCFileStream like an abstract class16:22
@wikingand then inherit from there the simple posix based file reading16:23
sonne|osxso one could make it work with std::iostream / filename / http urls or anything filish16:23
@wikingand the libarchived based one16:23
@wikingwhat we can do is that simply have like CLibarchiveStream as default16:24
@wikingif libarchive is available16:24
@wikingas it supports raw file reading16:24
@wikingi.e. null-compression files16:24
@wikingas well as compressed files16:24
@wikingand if libarchive is not available we use the posix backend by default16:24
sonne|osxyou mean you would add some convenience constructor or what?16:25
sonne|osxif you just work with the abstract class it doesn't really matter...16:25
@wikingsonne|osx: no... i mean that we leave CFile as is (almost.. remove some of the ctors)16:25
@wikingand then if u call16:25
sonne|osxan then?16:25
@wikingCFile(const char* fname, char rw='r', const char* name=NULL);16:25
@wikingthen that would use a default backend (depending what is available)16:26
@wikingsee above16:26
@wikingof course there would be another ctor16:26
@wikingwhere u can set your own backend16:26
@wikingor prefered one16:26
@wikingand as well get reference on the CFileStream16:26
sonne|osxwiking: ahh ok16:27
@wikingbut before doing this16:27
sonne|osxso CFile gets some set_stream() ?16:27
sonne|osxsounds good!16:27
@wikingbut now i'm just looking into io/stream16:27
@wikingi dont know16:27
@wikingeither we try to somehow merge these abstract classes14:28
@wikingor the completely different way to do libarchive support16:28
@wikingis to add only to StreamingFile libarchive16:28
@wikingi.e. StreamingAsciiFile.h16:28
@wikingas actually libarchive, given its architecture, is more suitable for StreamingFile16:30
@wiking(see the problem with seeking)16:30
sonne|osxwiking: na what you proposed sounds pretty good so I would do that rather16:32
@wikingi'm just looking at this16:33
@wikingand actually having this in streaming16:34
@wikingis basically only changing16:34
@wikingand that's all16:34
@wikinginstead of this whole hacking16:36
@wikingok i'll have a look in this16:37
@wikingas i really hate actually all these io classes hanging around16:37
@wikingThe vote ends Wednesday 13 November 2013 at 17h 27min 00s (local Paris).16:39
@wikingin 1 hour we have the release of results for the shogun e.V16:39
@wikingPlease wait for 47 minutes 35s.16:39
sonne|osxwiking: finally...16:44
shogun-notifier-shogun-web: Soeren Sonnenburg :master * 64cd4da / static/media/feature_matrix.csv,templates/matrix.html,util/
shogun-notifier-shogun-web: add overview table for related toolboxes17:30
shogun-notifier-shogun-web: Soeren Sonnenburg :master * a3e207a / templates/matrix.html,util/
shogun-notifier-shogun-web: make feature bold17:41
sonne|osxwiking: iglesiasg votes are in ...
-!- benibadm_ [~benibadma@] has quit [Ping timeout: 264 seconds]17:55
@wikingsonne|osx: coffin fw within should = streaming?18:00
-!- krispin [0e8bb972@gateway/web/freenode/ip.] has joined #shogun18:14
-!- Saurabh7 [~Saurabh7@] has joined #shogun18:22
-!- lisitsyn [~lisitsyn@] has joined #shogun18:31
-!- lambday [67157f37@gateway/web/freenode/ip.] has joined #shogun18:35
sonne|osxwiking: coffin what?18:36
-!- benibadman [] has joined #shogun18:38
-!- sonne|osx [] has quit [Quit: sonne|osx]18:40
-!- lisitsyn [~lisitsyn@] has quit [Quit: Leaving.]18:41
-!- krispin [0e8bb972@gateway/web/freenode/ip.] has quit [Ping timeout: 250 seconds]18:43
@wikingsonney2k: fw=framework18:44
-!- lisitsyn [~lisitsyn@] has joined #shogun18:46
-!- new_lido [~walid@] has quit [Quit: Leaving]19:05
@wikingok woah19:09
@wikingi've just downloaded 22gigs in 10 seconds :D19:09
lisitsynwiking: with quantum computer?19:11
lisitsynor what?19:11
lisitsynwiking: ahh you put it to /dev/null directly? ;)19:11
@wikingand gigabit19:12
lisitsynwiking: but hard drive throughput?19:13
lisitsynhow can that be?19:13
-!- new_lido [~walid@] has joined #shogun19:15
@wikingthe zfs cache is 14 gigs19:16
@wikinglisitsyn: now i've upgraded myself from working with win7 to working on rdf+sparql19:30
@wikingfucking hell... i'm really surprised how semantic web got boosted in the last 5 years19:30
-!- hushell [] has joined #shogun19:41
lisitsynwiking: I have no idea what these rdfs sparqls are :D19:42
lisitsynpotlw uwcs goyus19:42
@wikingsemantic web19:52
@wikingit's your friend19:52
@wikingsee: freebase.org19:52
@wikingit's fucking crazy19:52
@wikingand amazing19:52
@wikingwe should try to use shogun on that19:52
@wiking(i see it dying on it)19:52
@wikingthe database itself is 250gigs decompressed :)19:52
-!- gsomix [~gsomix@] has joined #shogun19:58
gsomixgood evening19:58
gsomixsorry for my absence19:58
@iglesiasgsonney2k,  100% of votes, amazing :D19:59
@iglesiasghaha this is funny19:59
@iglesiasgwe have taken measures to make it anonymous19:59
@iglesiasgand now it isn't at all, because everyone voted, and voted the same :D20:00
-!- benibadman [] has quit [Remote host closed the connection]20:01
-!- benibadman [] has joined #shogun20:02
lisitsyniglesiasg: three logicians walk into the bar20:06
lisitsynyou know that joke yes? :)20:06
-!- benibadman [] has quit [Ping timeout: 272 seconds]20:07
@iglesiasgmmm I am not sure!20:07
@iglesiasgthe one of half a beer?20:07
lisitsynso the bartender asks20:07
lisitsyndo ALL of you want some beer?20:07
lisitsyn1: I don't know20:07
lisitsyn2: I don't know20:07
lisitsyn3: YES!20:07
lisitsynit is the same catch as we all know who voted for whom20:08
@iglesiasgall right, I will catch you later guys20:13
-!- iglesiasg [~iglesias@2001:6b0:1:1da0:9895:4f04:e0e5:ccbc] has quit [Quit: Ex-Chat]20:13
@wikinglisitsyn: :D20:18
@wikinglet's import 250 gigs of rdf20:19
@wikingand then lets index it20:21
@wikingyey what amazing world we live in ;)20:21
-!- benibadman [] has joined #shogun20:28
@wiking"that the huge amount of memory assigned to the indexing tool (32GByte) will only20:31
@wikingbe needed during the final optimization of the created Solr Index"20:31
@wikingi remember the time i bought a 640 megz harddrive20:31
-!- shogun-notifier- [] has quit [Quit: transmission timeout]20:41
-!- hushell [] has quit [Ping timeout: 272 seconds]20:43
-!- hushell [] has joined #shogun20:54
-!- lambday [67157f37@gateway/web/freenode/ip.] has quit [Quit: Page closed]21:37
-!- iglesiasg [] has joined #shogun22:41
-!- mode/#shogun [+o iglesiasg] by ChanServ22:41
@iglesiasgsup guys22:41
-!- benibadman [] has quit [Remote host closed the connection]22:55
-!- benibadman [] has joined #shogun22:56
-!- benibadm_ [] has joined #shogun22:57
-!- benibadman [] has quit [Ping timeout: 245 seconds]23:00
@wikingcrunching data23:13
-!- benibadm_ [] has quit [Remote host closed the connection]23:28
-!- benibadman [] has joined #shogun23:29
-!- benibadman [] has quit [Ping timeout: 248 seconds]23:33
@iglesiasghehe as usual then23:35
--- Log closed Thu Nov 14 00:00:24 2013