I’ll be there

August 3, 2011

I’ll be there in Berlin too.

So if you have any questions or just want to talk about the comic plasmoid or KGet, feel free to get in touch with me.

Furthermore, this will be the first time I visit a KDE conference — in this case also a GNOME one — so I am quite thrilled already. 🙂

Comic Plasmoid vs. RSS Readers

May 21, 2011

Last time I mentioned that I was going to post a new entry soon. Well, guess what: my "soon" probably takes longer than your "soon". 😉

People who have looked at the feature plan might have seen that I planned to add support for random strips for 4.7. Well, I had it working here, but I am not happy with the result, or rather its implementation, so that has to wait for 4.8. The other feature I was working on made it in, though.

If you like comics a lot and have many tabs in the comic plasmoid, you’ll soon encounter a problem: you simply don’t know which comic has an updated strip. So you either click on each tab, hoping to finally see how a story proceeds, only to be disappointed that there still is no update, or you might switch to something else, namely RSS readers. There you see updates quickly and don’t have to bother with the rest. RSS readers have disadvantages too: it is not so easy to jump to old comic strips, create comic book archives (*hint*), …

What I did was combine the advantages of both. Similar to an RSS reader, you can define an interval at which the comic plasmoid checks for new comic strips, let’s say every 30 minutes. If it finds new comic strips, the tabs get highlighted — a feature I implemented in Plasma::TabBar (thanks for the help, Plasma guys! 🙂 ). Pressing Ctrl + N or using the context menu lets you jump directly to the next tab where a new strip was published. Thus you can have dozens of different tabs open while still being able to navigate to new strips quickly.

Comic Plasmoid for 4.7: Creating Comic Book Archives, Memory Leaks …

April 28, 2011

Lately I have been working on the comic plasmoid again, adding some features and also removing some.

Bugs and Nepomuk

No, I am not saying Nepomuk is buggy, but that my code (sometimes?) is.

That is why I fixed some comic plasmoid bugs.

Memory leak

One bug was quite nasty, to say the least. In 4.5 I added prefetching; the idea and a patch were posted by someone on the plasma mailing list, and the idea is indeed cool:

When you look at a comic strip, the previous and next strips are automatically fetched, so if you click next/previous you don’t have to wait long. This improved reading comics a lot, but it added a memory leak: we did not disconnect the source, so it stayed around and kept using memory.

When looking at many comic strips, it could happen that the memory usage of plasma-desktop would skyrocket. So if that was the case for you, then I am sorry.

In fact I wrote to the packagers mailing list mentioning the commits that fix this, so hopefully — I have no clue whether distributions have created new packages — you have the updates already.


I have been using Nepomuk in the comic plasmoid for quite some time now. That means if you save a comic strip to your hard drive, it automatically gets tags, the author is assigned to it, etc.

Only the implementation was partially incorrect. Thanks to Sebastian — I suppose he is another KDE dev we need clones of — it is correct now.

What does that mean for you? Well, let’s say you also like Calvin and Hobbes and have stored some fantastic strips on your hard drive: clicking on the author "Bill Watterson" will show all the comics you have by that person. Some comics have multiple, changing authors, so this can help you find comics by a specific author.

Also other data is stored via Nepomuk but I don’t want to go into detail here.
And for those who don’t use Nepomuk? Well the comic plugin works fine without it.

Comic Book Archives

One of the features I added is support for creating Comic Book Archives. With 4.7 you can right-click on a comic and choose "Create Comic Book Archive".

In the following dialog you can choose different types of ranges:

  • Archive all comic strips
  • Archive from the beginning to a certain identifier
  • Archive from the end to a certain identifier
  • Archive a defined range

Depending on the type of the comic (Date, Number, String) you’ll get different input fields.

Clicking OK will start a job that downloads all the needed files and informs you of errors. You can create multiple jobs; there is no limit.

If possible, it will determine how many files are to be downloaded and display the percentage that has already been downloaded. When done, Nepomuk is used here as well.

Automatic Comic Plugin Updating

This has not only been a feature request on bko but also something I wanted to do for a long time. It sucked big time to open the "Get New Comics" dialog just to see which comics needed updating.

Yet when I first tried to implement auto-updating, I was not only greeted by a multitude of "do you want to overwrite file xy" dialogs, but the updates also wouldn’t be recognised as such. After restarting I would be asked again …

These problems were caused by bugs in KNS which I — FOSS for the win — fixed. Now it is working really well. 🙂

Btw. I did not forget about the people who modify comic plugins and don’t want them to be updated: you can turn auto-updating off if you don’t like it.


The config dialogs got messier and messier, and when I asked Aaron to review a patch of mine, he mentioned that. So I started improving those dialogs with input from Aaron and Todd, and imo they are a lot better now.

As you can see, choosing comic strips via a combo box is not supported anymore. The reason both existed has a lot to do with the history of the comic plasmoid and my hesitation to remove existing stuff.

Other than that, you can’t hide the tab bar anymore. Instead it will always be shown if you have at least two comics selected. That also means that I removed the "press Ctrl + scroll the mouse" feature for changing tabs. Automatically switching tabs is also removed; I have no clue if anyone used that feature, and I am working on something that should be better anyway.

Btw. the default values might change, I have not decided on them yet.

All this made not only the code clearer but imo also the user interface. There is for sure a lot more to do, but I think this is a lot better already.


These are not all the changes I want to make to the comic plugin for 4.7, so I hope to be able to blog about some other cool things in the next few days/weeks.

PS.: If you wonder why there is no Oxygen style in my pictures: well, I don’t use it on my devel account.
Do I hate Oxygen?
Yeah, I do, I hate breathing it.
No, in fact I don’t, but this way it is easier to distinguish apps on my devel account from apps on my normal account, which I run via sux. 🙂


November 17, 2010

Often when I follow discussions about KDE on the internet, I see people claiming that KDE devs do not care about bugs, only about features. Also often mentioned is that applications aren’t polished. Well, in fact there are also people who disagree with this assertion.

So this time, instead of "new features", I’ll show you some polishing work that I did lately. In fact I do that quite often (mostly for KGet), as do most other devs I know. Just because polishing achievements aren’t blogged about that often doesn’t mean they don’t happen.

The same is true for bugs. Just take a look at bko and the statistics there, and especially compare them with the last few years. You’ll notice that the total number of bugs stays mostly constant despite all the new features and new programs, and that there are people fixing them like crazy.


First of all, after the last blog entry I still improved the speed in some areas; now (4.6) adding many urls to KGet is blazingly fast. The interesting thing is that all the profiling work started because of a bug report: a user reported that KGet crashed for him when trying to download 602 files, and he also mentioned that it was quite slow with that many files.

This rather parenthetic statement — and in fact the hope to reproduce the bug — prompted me to try it myself, as I had never downloaded that many files at once before. And well, I was shocked. So I started fixing it step by step: what took many seconds before now takes less than one and is hardly noticeable. The bottom line is that we do care about our users and that reporting bugs is really important.

Now there was one thing that annoyed me and Lukas to some degree. Adding transfers to KGet was not always consistent — and still isn’t, because we have to keep our DBus interface — different dialogs would be displayed. And what I did not like is that if transfers or files already existed, you’d get a dialog for every such occurrence. When downloading the 602 files this could get annoying if I forgot to delete them beforehand.

So here you go: now you get one dialog for transfers, with the options "Yes All" and "No All". You have to know that KGet does not ask you for a location for transfers if one is specified already (e.g. via settings in a group), so if a file already exists at that location, you’ll get the dialog you already know from moving files around in Dolphin.

In the case that no location has been specified, you will see a dialog with all the files you want to download and a location pre-selected. Once you get that far, you won’t be bothered with any further dialogs; instead all the information is displayed in the existing dialog.

Yes, you can’t rename those files now, but you can choose a different location. Still, having one option less in this case is imo better than having to constantly deal with dialogs. Maybe I’ll simply add another column with the filename, where you could specify it, plus an auto-rename button. Though that is something to think about.


In the above case I mentioned that I do not really like dialogs — despite using them quite often; yeah, it’s not easy to do without them 😀 — but what I like even less are dialogs that make me afraid of doing something wrong.

The Dolphin rename dialog was one of those, and many dialogs I created in KGet were such too, and many still are. Often you don’t think about the problems, and often you don’t have the time to fix them.

So now, what was bad about the rename dialog? It allowed you to press "Ok" if no name was entered, and if the name was not changed.

Especially the first case is confusing. What would happen if you pressed "Ok"? Would the file be renamed to nothing and cease to exist? If you tried, you’d realize that nothing happens; you would just be informed that renaming to nothing is not possible. Yet why should you try if you are unsure? The whole situation could lead to you being extra careful with this operation and having an uneasy feeling, even though you could basically do nothing wrong. But you’d only know that if you risked doing something wrong.

What I changed is that "Ok" is now disabled if the name is empty or not changed at all. In fact you could argue that the same uneasy feeling applies when you accidentally rename the file to another existing one, and I agree, though I have no solution for that yet.

Spell Check Runner

I was fixing a bug in this runner when I thought about a feature I could implement. Now, I mentioned above that this post is not about features but rather polishing, though often polishing means adding small features that make the user’s life easier.

Imagine you are me, so you speak German and have your locale set to German. Yet you want to know if you spelled "coding" correctly. So you fire up KRunner and type "spell coding", and this is what you get:

Now you’ll realise that it uses a German dictionary by default and not the one you want it to use at the moment. You may also wonder what all those suggestions have to do with the term, but that is a different matter. Now you could fire up KWrite, change the spelling locale and then see if it is correct, or you could search for the term on Google and hope it would recognise any mistakes. Neither is comfortable. What would be great, though, is if "spell en coding", "spell englisch coding" or "spell en_US coding" worked.

Guess what with 4.6 it will:

You have probably noticed that I wrote "englisch" instead of "English". That is simply because "englisch" is the German word for "English", and since I have the German locale set, the runner expects me to enter German words.

Now when you stumble over a different term like "Sackerl" you try it again, though you fail, since that is an Austria-specific term, so you change the input to:

In fact if you don’t have the correct dictionary installed it won’t work:

The implementation of language names in the user’s locale is a little hacky, so it won’t recognise every term you throw at it; e.g. "austrian" or "österreichisch" would not work. In those cases you have to use the ISO form, "de_AT" for example.

Shell Runner

Here again I was fixing the shell runner and made a small polishing improvement. Now when you click "run as a different user", the focus goes directly to the input field for the user name. One click less, and exactly what a user wants when choosing this option in the first place.


So a small feature, as in the case of the spell checking runner, can make an existing (great) one a lot more usable and more complete. And that is polishing for me: making something existing complete, be it by little refactorings of the UI or by adding a small feature that held the original back. All these are things KDE devs do on a regular basis; they often just don’t report on it.

A way to realise that would be to use the newest KDE version for a long time and then move back to an older version, but I am sure you don’t really want to do that. 🙂

Speeding up KGet — using Callgrind

September 11, 2010

Old slowness

If you have used KGet, you might have noticed that it tends to be very slow, or rather sluggish, if you have many items in your transfer list. For one, it can take very long to start, then long for an operation you want to do, and again long to remove many transfers.

I had used Callgrind before to track these issues down. Though it turned out that the Callgrind output was so confusing — even combined with KCachegrind — that it was hard to find the parts that were relevant to the specific task I wanted to look at.

Efficiently using Callgrind

Now, triggered by Jonathan’s blog post (and the follow-up) on speeding up KRatingPointer by finding bottlenecks with Callgrind, I did the same again for KGet. But this time I used more sophisticated methods that really make it shine; an easy introduction can be found here.

The idea is that I define the places that should be profiled in KGet itself via some macros (CALLGRIND_START_INSTRUMENTATION; and CALLGRIND_STOP_INSTRUMENTATION; from #include <valgrind/callgrind.h>). That way only what I want is profiled and I don’t get profiling information for areas I don’t care about. The latter part is really important, since otherwise the result would be completely distorted (e.g. by the KGet startup phase etc.). Now I only need to start callgrind with valgrind --tool=callgrind --instr-atstart=no kget --nofork and I get some useful dumps.


So what does using callgrind look like?

I wanted to know what takes long when starting KGet, specifically loading all transfers. So I added the above-mentioned macros at the beginning and at the end of KGet::load(…). Running callgrind and then KCachegrind on the generated data produced something like what you can see in the screenshot (sorry for the German).

As you can see, loading 604 transfers was quite expensive; most of the cycles were spent in TransferTreeModel::addTransfer, and there what took so much time was QStandardItem::appendRow — or rather the actions caused by the signals emitted by appendRow. What I did to improve this was add a TransferTreeModel::addTransfers method to add all the transfers at once. I did it in a slightly hacky way: I call beginInsertRows, deactivate the model’s signal emission (as those signals resulted in constant redrawing of the view connected to the model), call appendRow multiple times, reactivate the signals and call endInsertRows. I only did it that hacky way because I did not manage to use QStandardItem::insertRows; it appears not to be what I need.

Those measures alone resulted in a huge speedup. After doing even more, I got something like this:

As you can see, the total cost decreased drastically, and a lot of time is now spent parsing XML data. Still, there are areas left that can be improved easily. DataSourceFactory::setStatus takes 16% of all cycles — 99% of that because of signals. Taking a closer look, one can see that both DataSourceFactory::processedSize and DataSourceFactory::percent are emitted 1208 times while the rest are emitted 604 times. That means that for each transfer those two are emitted twice. Emitting them just once gave a 5% speedup; more would be possible by emitting just one signal, something like changeEvent(Tc_Status | Tc_Speed | Tc_TotalSize | Tc_DownloadedSize), i.e. not having a signal for each event but rather encoding in the sent data what changed.


All this led to many speed improvements in KGet. KGet should now (4.5.2) be a lot faster at startup and when scrolling, and significantly faster when removing many transfers. I will look into speeding up starting/stopping many transfers too.

Before I go into any details, remember that all this was tested on my machine, so the results may vary on yours. Also keep in mind that "faster" here means fewer cycles, which does not translate 1:1 into real time: I/O operations — Callgrind does not help here — can use few cycles yet still take a lot of time, as a hard disk is very slow compared with other memory. An example of this is the SQLite history backend, which, looking at the cycles, only improved marginally (~20%), while the time for deleting ~600 downloads — all of which get added to the store — shrank from 51942 msec to 138 msec.

Especially removing transfers got a lot faster. Removing many unfinished transfers can now be more than 170 times faster than before, and removing finished transfers can be around 85 times faster. In the latter case the speedup depends on whether the removed transfers are neighbours, i.e. there are no transfers in between them; in the worst case it should still be a little faster. Starting KGet with ~600 downloads was likewise improved 35-fold.

In fact there are still a lot of areas for further improvement, though I am quite happy with the result already.

So thank you, Callgrind devs, Jonathan for your blog entry, and especially Kyle Fransham for his great tutorial, which imo should be put on Techbase! And in fact all the others who helped me deserve a thank you as well. 🙂

PS.: No, this is not the large feature I was talking about last time; I haven’t worked on that for a while. This is more like bug fixing. 😉

KGet is alive too

August 26, 2010

Alive and kicking

There haven’t been blog posts about KGet for a long time, but KGet is alive too.

The last few months and weeks were quite busy for us, though we still managed to fix a lot of bugs, and 4.5.1 will see some further fixes. Still, there are some serious bugs left that I hope to track down with our users, since sometimes bugs aren’t reproducible for me.

Besides the bug fixes, some UI improvements have entered KGet that should make it more comfortable to use. Now if you resize the header of a list, its size and position will be restored the next time you look at the list. The same holds true for some dialogs. Sometimes small details really can change a lot.

KDE Brainstorm

In any case, this post is also about a new feature I am working on. I stumbled upon an idea on KDE Brainstorm that made quite some sense: you can set up KGet to monitor your clipboard and automatically add urls as downloads. The main problem is that any url — that is not a local file — would be taken.

What I added is a combined whitelist and blacklist. The user only has to add some rules, and the order of the rules defines their priority: rules higher in the list have higher priority than those lower down. Supporting both wildcards and regular expressions makes it possible to realise some interesting use cases. The added rules can also be edited later on if the user wants to change them.

First KGet checks that the url is not a local file, has a protocol, etc., and then continues with your rules; in our case it will automatically start downloads for any rar or zip file unless it is from http://kde*

A drawback you can see in the screenshot is that “Add” has no shortcut — it is a default button provided by KDE — though this is not that bad: you simply enter the pattern and press Return, and then you can enter the next item.

As you can see I also changed the advanced configuration page a bit, I guess it still needs some work though.

The most interesting part imo is the code, though. The feature itself was quite easy to implement; it is the GUI that took most of the time. Yeah, 95% of the new code is just for the GUI. Not that this code was hard, since it is always the same: 1. create a model, 2. create a delegate, 3. add means to add/remove data.

That really shows one of the strengths of ignoring GUIs altogether and relying on config files. Yet that wouldn’t be “user-friendly” nor discoverable, so we keep it as it is. 😉 And I really try to make things usable and especially safe. “Safe” meaning that it is hard to do wrong stuff: no adding empty items, editing can’t result in empty items, etc. Imo this is one of the keys to making features discoverable. If a user fears that any “wrong” click could destroy something, they won’t even look for features that could be of use to them. Often my target is to have a KMines with as few mines as possible. 😀

I also have another new — very large — feature in my pipeline, though I’ll work on it more before disclosing it here.

Git bisect

January 25, 2010

KDE is supposed to move over to Git, and there have been lots of discussions and blog posts about that lately. Now it is my turn to provide a small post myself. Those who know git bisect can skip this one.

Git bisect is a great tool that has saved my day several times already, again just today, shortly before the release of KDE 4.4 SC. In short, git bisect is there to find faulty commits.

Often when developing you encounter a bug where you know that it worked many revisions ago, but how do you find the revision that introduced it? When using Git, a few commands are enough:

  1. start it: git bisect start
  2. tell git which version is bad: git bisect bad {sha1/tag}?
  3. tell git which version is good: git bisect good {sha1/tag}?
  4. end it: git bisect reset

Basically that is all you need to know, and here is an example session (I was on master):

  1. git bisect start
  2. git bisect bad // the latest commit is bad
  3. git bisect good 73ea4b2fd5ae39993009dd765c6ff562ceec09da // this commit is good
  4. XY revisions left to test after this (roughly Z steps)
  5. recompile
  6. test
  7. either git bisect bad or git bisect good, depending on whether it did not work or did work
  8. go to 4 or 9
  9. d4f650537917441fcfd3aa71e0c646b8fc7464ec is the first bad commit // yeah, it was me who did crap 😉
  10. git bisect reset
  11. Fix and commit

That process is very fast, as Git uses bisecting, as the name suggests: e.g. there are 100 commits, commit 1 is working, commit 100 is not; now it checks commit 50, and if that works it checks commit 75, otherwise commit 25 …

Whee, and another really stupid and bad bug that was only triggered occasionally bites the dust.

The bottom line? Use git bisect when a commit is wrong, or even better, use test cases to avoid wrong commits in the first place.

And another KGet entry

November 9, 2009

There have been some blog posts on KGet and now it is my turn to add another one. 🙂

The last few weeks/months I kept polishing all the changes I made to KGet during GSOC and also introduced new stuff. It is fantastic to see how KGet has improved over the last few months with all the work we (the KGet team) put into it.

Speeding up downloads

Multisource downloading worked, and still can work, in a way where a file is split into segments; a TransferDataSource is then assigned a segment and downloads it. That means whenever a TransferDataSource finishes downloading a segment, it is assigned a different segment and connects to the server again … In fact that is not ideal, as connecting to the server takes time, so I changed it.

Now multiple segments, e.g. segments 1 to 10, can be assigned to one TransferDataSource. Thus the connection is not closed and recreated every time a segment is finished, but only closed once the whole segment range is finished, resulting in less resource usage and faster downloads. [1]

Whenever the user decides to use more connections per mirror, or adds another mirror, the TransferDataSource that has the most undownloaded segments splits its range.


One of those polishing changes was to use threads when creating checksums, e.g. when a download is being verified, or when you create a metalink and choose to automatically create checksums. These calls are not blocking anymore, and you can control KGet normally during such operations.

I also experimented with OpenMP and with optimizing the loading of files when calculating checksums, though the very small speedup does not justify the changes to the code, so it won’t go in.


Other than that I implemented PGP signature checking.

The user has the possibility to enter a signature, or in the case of a metalink this happens automatically — though only if the signature is embedded in the metalink. If there is no key for that signature, the user is asked whether the key should be searched for. It can even be specified (in the preferences) in which order the servers should be tried.

Verification preferences

Yes, now all of that can be configured. Btw. “Strong” is the default.

Ok, so what does it all look like, you may ask? First I changed the transfer settings dialog to give the user a better overview of the situation for all the files of a transfer, which ones have been verified, etc.


The verification dialog now shows which checksums have been verified (in the screenshot I used the “Strongest” option):

Enough talking about side aspects; here is the signature dialog. I tried to use icons where feasible — to help the user find what could be a problem — and it will see some improvements in the following weeks, though I think it is in a releasable state already.

In fact the newly added keys will also appear in KGpg or Kleopatra or any other program using GPG one way or the other, as I’m using gpgme++. Btw. thanks to everyone who worked on gpgme++, it makes life way easier than gpgme [3], and thanks to the people who helped me on the ml.

Before you ask: so far I do not plan to add support for decrypting; imo you should use other tools for that than a download program.


I also fixed some bugs; that annoying one where the details of transfers would still be shown even after they were (re)moved is finally gone.

Lukas also worked a lot on KGet, e.g. changing the basic model that is used in the view (up to him to blog about that 😉 ), and Dario tracked down some rare crashes using QTests.

Regarding bugs: if you find some in trunk, please report them so that we can fix them before 4.4.


All in all I have the feeling that the next KGet release (with KDE 4.4) will be a great one. That also means that you are encouraged to test KGet from trunk and report bugs, and that I’m encouraged to try fixing them in the weeks to follow. 🙂

PS.: This blog post has been in the works for quite a while, but real life caught up.

[1] Thanks to the people on #kde-devel who discussed this issue with me! And thanks in general for all the help you gave me in the several #kde channels. 🙂

[2] Btw. do you know an easy way to find a good size for a dialog? E.g. all columns in a view should not be smaller than their preferred size, and when there is enough space the dialog should expand to show everything -> the dialog you see in the screenshot has been manually resized by me, otherwise it would not look that way.

[3] In search of what I thought was a bug, I reprogrammed large parts with gpgme (yummy, error handling with C libraries *goes crazy*); later it turned out that I had just misunderstood something. Meh!

Nepomuk is useful, but is it useable?

October 3, 2009

Lately there were some blog posts on Nepomuk, and in the comment sections there were also some points of criticism. I want to address one of them here myself: usability.

We have seen that Nepomuk can store (sic) lots of data, lots of “sentences”: an object has a tag, a date or whatever. That is pretty nice, as it makes it possible to connect all that information together.


Now where I see the problem is that accessing all that information basically works the same way: nepomuksearch:/hasTag:OpenCL

That’s the problem! I do not want to type such lines or learn the verbs, and neither do other users, especially if they do not even know that they have to do it this way. And I’m not even sure whether hasTag/tag is localised; if it is not, well, fun for the regions of the world that use different letters/alphabets than that.

As you see, the problem is not really Nepomuk, but rather the present ways for a user to interact with it.

Explaining what you want

As I’ve posted in a blog comment some weeks ago, searching should in my opinion work the way you explain something to others. You talk about “London”, your counterpart thinks you are talking about the city, so you say “No, not the city but …”

  • “… a book by Jack London”
  • “… our school colleague Jeffrey London”
  • “… a person on irc with that nick”
  • “… the paper I wrote on London’s homeless people”

It is trial and error: you throw out one term, and maybe your counterpart instantly knows what you are talking about; if not, you have to be more specific, make connections, etc.

The kind of “perfect search” you should always avoid is: “Let’s talk about Jeffrey London, you know, who we went to school with [nice so far], who was born on 1/1/1970 in Bram, whose parents were …, whose RNA is …”


But this is not meant to become a troll entry, so what I propose – though I will most likely not do it myself, so consider this a drive-by posting 😉 – is imitating and improving GNOME’s Tracker search.

Tracker in Ubuntu 7.10 (two years old)

There you have one line where you enter whatever you want, be it names, tags, parts of a document, etc. In the GUI you have categories on the left, like “Documents”, “Images” etc.; we should add Contacts, Mail and other categories we find useful, but there shouldn’t be too many.

No one has to care about the semantics of a search, and as most people probably don’t have thousands of files with “London” somewhere, they should find what they want easily.

If they don’t find what they want, they can specify one of the categories I mentioned above, add another term to the search, or, in the KDE case, use an options section where they can fine-tune the search:

  • part of filename
  • created (on|before|after|between)
  • last viewed (on|before|after|between)
  • created by (dropdown list with people who were found in the search as author etc.)
  • etc.

And that’s not all: there could be an automatically created section with Nepomuk categories that were found with this search, like:

  • Tag
  • Contact
  • Mail

Essentially this could be merged with the categories mentioned above, though there should be a fallback: e.g. if Strigi has not indexed a file yet, there should still be a category “Images” containing it.

That should give users results very fast. The target should not be to create the “perfect search” where you get what you want on the first try, but rather to have a good starting point from which the user can narrow the results down if they want or need to.


Yes, I have read about the GSoC projects for Nepomuk and I like the changes to the search (I’m not sold on the loading/saving), though judging from screenshots (!) it still rather tries to implement a perfect search instead of adding means to improve a non-perfect result.


Yes, with all that you would not be able to use all of Nepomuk’s features, but you would be able to use it; everyone would be.

It would be a starting point where improvements could happen; there is no sense in waiting for KDE 4.4++ or whatever to have a nice desktop search if the capabilities are already there.

Please make Nepomuk useable, you can make it perfect later on.

GSOC KGet — wrapup

August 18, 2009

Now that GSOC is over, everyone expects to see the results, so here you go. 🙂

The last few weeks were mostly filled with improving the newly added features and adding some small features here and there. I was able to fix lots of bugs in my code that I had sometimes experienced, and I hope the user experience won’t be that bad. 😀

KGet now supports:

  • multisource downloading
  • changing the destination of a download while downloading
  • adding/removing mirrors of downloads while you download, and changing the number of connections to the mirrors
  • adding checksums to downloads and manually verifying them
  • automatically searching for checksums on the server (e.g. by appending .md5 to the url)
  • automatically using present checksums to verify a finished download
  • repairing a broken download (redownloading broken parts or the whole download)
  • when downloading a metalink you can define which files should be downloaded and which not; this can be changed later in the transfer settings
  • some data is passed from the metalink to Nepomuk, like the publisher etc.
  • MetalinkCreator -> an assistant to create metalinks; currently it is based on the most recent metalink draft (v. 12 — not all parts are supported in the GUI yet), so it is bound to change — I’ll probably show a video of the MetalinkCreator in action once I have time
  • the parser used for the metalink files also works with “old” metalink files (v. 3.0, 2nd edition), so it could be used to convert them to the new format

Support for digital signatures and a BitTorrent TransferDataSource did not make it in, though.

That’s a wrapup of what I did; I probably forgot a lot, as it has been so much code: the diff has ~18,000 lines, and more than 10,000 lines were added: a lot of comments, UI stuff and some code ;). [1] Interested people can look at the code here [2]; instructions can be found on the KGet ml in the “GSOC — Review” thread. And yes, it is in fact planned to push all of that to trunk; currently the code is being reviewed and changes are made to it.

What will the future bring? I am going to continue to work on KGet and plan to add some features, though the pace of changes will be a little slower now.

Thanks to my mentor Urs and to Lukas who helped me on my way along.

PS.: I’m a lazy blogger. I like writing code more than writing about writing code.

[1] Yeah, I’m proud of my work. 🙂 I didn’t think that I would end up writing so much.

[2] http://github.com/mfuchs/kget-gsoc/tree/GSOC/master