
Resource download system


robhol


After spending ten minutes being refused entry to my own server from a laptop standing a metre away, I'm quite PISSED and would like to suggest some measures to make the resource download system less horribly broken and annoying.

Compression

I suggest that, if it's not already done, individual (or ALL) resources should be compressed when needed into a temporary archive stored on the server. Instead of sending 100 small files, the server could send this single larger archive, which would be extracted on the client side. This seems a lot more efficient to me than downloading 2000 small files, and it would almost certainly reduce the total data size too. Double win.

As far as CPU/disk space issues go, you might want to make some kind of option for disabling this.
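To illustrate, here's a rough sketch of the idea in Python (nothing to do with MTA's actual code; the paths and layout are made up): pack all of a resource's client files into one compressed archive so the client fetches a single file instead of hundreds.

```python
import zipfile
from pathlib import Path

def pack_resource(resource_dir: str, out_path: str) -> None:
    """Pack every file under a resource directory into a single
    deflate-compressed zip, preserving relative paths."""
    root = Path(resource_dir)
    Path(out_path).parent.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in root.rglob("*"):
            if f.is_file():
                zf.write(f, f.relative_to(root))

# Hypothetical layout: one cached archive per resource, built server-side.
pack_resource("resources/admin", "cache/admin.zip")
```

One transfer and one decompression on the client side, instead of thousands of request/response round trips.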

Download speed

I have no idea how the resource download even manages something as disgustingly slow as 40 kB/s on a LOCALHOST connection, but it's happened. I also have no idea how to improve it, but you really should look into it.

Download ERRORS

Unless you have low blood pressure and have run out of medication, the error messages are useless to the point of "why-the-f***-did-they-make-it-this-way". If something's not being sent by the server, you should get a message (as clear as possible) about what could be wrong, not some generic bullcrap message on the client side. Also, when the client exits due to a download ERROR, the server should not assume it quit intentionally, because... well, it didn't.

While I realize this might seem a tad like venting/raging, I hope you'll be able to see this from my point of view: a system whose inner workings I don't know, stubbornly refusing to work without giving any hint of what's wrong. I also have a headache at the moment, which doesn't exactly reduce irritability.

Link to comment

I agree the downloading needs some improvement. Speeds can vary anywhere from extremely slow to near-normal download rates. Map downloading, for instance, already takes ages, and then you still need the scripts etc.

Also, on servers that have a lot of client resources running, there seems to be a certain delay after the (map) download. MTA just hangs for a moment. Then, a moment later, ANOTHER download appears, which immediately times out because the server just kept counting during the freeze. Which is quite... annoying, as you'd then need to download the map all over again - which, as I noted before, can be extremely slow.

At the very least, the download timeout should either be much (and when I say much, I mean much much much) longer, or removed, IMO. Also, if I remember correctly, there was a download error triggered when the download was too slow. I'd suggest scrapping that one too; I don't see the point of it.

Link to comment

Resources aren't currently compressed; that would certainly help.

It'd be interesting if someone could do some controlled experiments to help narrow down the problem - is it the way the client requests the files, or how the server sends them? Does using an external server solve the problem?
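For anyone who wants to try, here's a crude way to measure it - a Python sketch where the hosts and file names are placeholders; point one URL at the built-in HTTP server and the other at an external server hosting the same files:

```python
import time
import urllib.request

def throughput(base_url: str, count: int) -> float:
    """Fetch `count` test files from `base_url` and return overall kB/s."""
    start = time.monotonic()
    total = 0
    for i in range(count):
        # Hypothetical naming scheme for the test file set.
        with urllib.request.urlopen(f"{base_url}/file{i}.dat") as resp:
            total += len(resp.read())
    return total / 1024 / (time.monotonic() - start)

# Placeholder hosts: 22005 is the MTA server's default HTTP port.
print("built-in:", throughput("http://localhost:22005", 100), "kB/s")
print("external:", throughput("http://localhost:8080", 100), "kB/s")
```

If the external server is dramatically faster on the same file set, the bottleneck is on the serving side rather than in the client.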

Link to comment

I'm sure DazzaJay or I have actually mentioned this somewhere before, possibly on the bug tracker. :P

Apart from the map transfer, your first and second points are the same issue. Downloading 2000 Lua files takes FOREVER, and the constant requests and server-side file reads slow the whole process down.

And sending the files compressed into an archive would fix it. We've seen this effect - one of our servers has custom maps/models, with some files going up to 8 MB or more. Those transfer as fast as possible - e.g. I get them at my broadband's rated 20 Mb/s. Getting all of the admin resource's flag images, however...

And for your third complaint - yeah, why should the client get a numeric code and NOT reference it against a string table to give out a friendly error? Such a list is on the wiki; anyone who gets an error and knows of the wiki entry already does that cross-reference manually, so it could just be built into the client.
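Something along these lines would do - a Python sketch with invented codes and messages (the real list is the one on the wiki):

```python
# Hypothetical error-code table; the actual codes and their meanings
# would come from the wiki's download-error list.
DOWNLOAD_ERRORS = {
    4: "The server's HTTP interface is disabled or unreachable.",
    6: "Download timed out: the server stopped sending data.",
    7: "Checksum mismatch: the file changed on the server mid-download.",
}

def friendly_error(code: int) -> str:
    detail = DOWNLOAD_ERRORS.get(code, "Unknown download error.")
    return f"Download error {code}: {detail}"

print(friendly_error(6))
```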

To sum up, I can understand that the dynamic map has to be sent as-is - it's likely to change mid-download, etc., so there are extra considerations involved. But how hard is it to compress the resources for the other transfers? Even one zip file per resource would be a significant improvement, and it would avoid on-the-fly compression, as the archives can be prepared and cached on resource load.

Update: Here's a little experiment for this issue that you can do on your own PC.

Find yourself two folders full of files - one with lots of small files (your browser's temp folder is probably a good bet) and one with a few large files. (I'm sure you have some. :P)

Try copying those folders - to another HDD if you have one; otherwise, just copying to somewhere else will suffice for the example. Monitor the transfer rate.

And you will see the problem.
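If you'd rather script it than watch a copy dialog, here's a rough Python version of the same experiment (file counts and sizes are arbitrary; both folders hold roughly 20 MB in total):

```python
import shutil
import time
from pathlib import Path

def make_files(folder: Path, count: int, size: int) -> None:
    """Fill a folder with `count` files of `size` bytes each."""
    folder.mkdir(parents=True, exist_ok=True)
    for i in range(count):
        (folder / f"f{i}.bin").write_bytes(b"\0" * size)

def time_copy(src: Path, dst: Path) -> float:
    """Copy a folder tree and return the elapsed seconds."""
    start = time.monotonic()
    shutil.copytree(src, dst)
    return time.monotonic() - start

make_files(Path("many_small"), 2000, 10_000)   # 2000 x 10 kB
make_files(Path("few_large"), 2, 10_000_000)   # 2 x 10 MB
print("2000 small files:", time_copy(Path("many_small"), Path("copy_small")), "s")
print("2 large files:   ", time_copy(Path("few_large"), Path("copy_large")), "s")
```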

It's a flaw of filesystems: per-file overhead dominates when files are small, and small files have consistently become more difficult to handle as HDDs have grown in size.

Link to comment

It's clearly not ideal to send files like this. One thing we could do is precompile the client-side Lua on the server and send the result, which would produce a single script file. Other resources could be compressed on load into a chunk. We'd still have to write them out to individual files, unless we used a solid pak-style file on disk (which may possibly be practical; the very first version of BLUE was designed like that).

The issue is that we currently CRC each file to check whether it differs from the server's version; if it does, we re-download just those files. With these changes, we'd instead re-download every file. Maybe this won't be an issue.
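For what it's worth, a small name-to-CRC manifest could keep the per-file check even with archives - a rough Python sketch of the idea (the manifest format is hypothetical, not anything MTA actually uses):

```python
import zlib
from pathlib import Path

def crc32_of(path: Path) -> int:
    """CRC32 of a file, streamed so large files aren't read whole."""
    crc = 0
    with path.open("rb") as f:
        while chunk := f.read(65536):
            crc = zlib.crc32(chunk, crc)
    return crc

def files_to_fetch(manifest: dict, cache_dir: Path) -> list:
    """Compare the server's name->CRC manifest against the local cache
    and return only the files that are missing or differ."""
    stale = []
    for name, server_crc in manifest.items():
        local = cache_dir / name
        if not local.is_file() or crc32_of(local) != server_crc:
            stale.append(name)
    return stale
```

The client could then request just the stale files (or an archive of them) instead of everything.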

Link to comment
Quote: "It's clearly not ideal to send files like this. [...] With these changes, we'd instead re-download every file. Maybe this won't be an issue."

Erm, just a quick question: with "like this", do you mean the current way or my proposal?

I'm imagining a system (optional, as mentioned) where a temporary unified archive is created out of client-side script and data files shortly after resources are initialized on server load. It would, naturally, need to be updated when resources containing client-side files are stopped or started. Compilation's not a bad idea (actually, I love it), but by compressing the files we could reduce the download size even more.

As for the updating issue, I guess I didn't think about that. But seeing as the initial resource download (especially the admin resource and its flag icons, *shudder*) is the worst one anyway, this would help quite a bit. If the client has an empty resource directory, it could still use the unified-archive approach, I guess. A slightly better alternative (or compromise) would be an archive for each resource - still not 100% efficient, but a lot less wasteful than re-sending all resources or sending tons of small files. Unless there are frequent changes to LARGE resources (which would be avoidable), that would probably be perfectly usable.
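To make the per-resource archive idea concrete, here's a rough Python sketch of the rebuild check (the paths are made up; the packing step itself could be a zip pass like the one sketched earlier in the thread):

```python
from pathlib import Path

def archive_is_stale(resource_dir: str, archive: str) -> bool:
    """True if any file in the resource is newer than the cached
    archive, i.e. the archive must be rebuilt before the next send."""
    zpath = Path(archive)
    if not zpath.is_file():
        return True
    built = zpath.stat().st_mtime
    return any(f.stat().st_mtime > built
               for f in Path(resource_dir).rglob("*") if f.is_file())

# Rebuild only when the resource actually changed, e.g. on start/stop.
if archive_is_stale("resources/admin", "cache/admin.zip"):
    print("rebuilding cache/admin.zip")
```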

The rest of the files could still be transmitted normally (or compiled individually? This could be another neat feature, if you make it optional) when something changes on resource restart, etc.

(Or you could just install the latest version of the most commonly used default resources into the client-side resource cache when installing.)

Link to comment

I think you need to consider where the potential costs are here, and I assume these must revolve around filesystem overheads and disk access speed. So, to avoid them, we can just avoid using the hard disk and instead rely on in-memory versions of files.

So, we could make it so that the server loads all the client-side files into memory on resource start (exceptions would probably have to be made for HUGE files). When the client requests a file, we send this in-memory version. When the client downloads the file, it doesn't write it to disk - it keeps it in memory and exposes it to the relevant functions. Perhaps we then lazily write it to disk over many frames; it doesn't matter if it never gets written, we just download it again next time. Making this all work would depend on all the things that currently load files being able to load from memory instead. Most that I can think of already can...
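As a toy sketch of the server half of that idea (Python's standard-library HTTP server standing in for MTA's real one, which works nothing like this), everything is read into memory once and served straight from RAM:

```python
import http.server
from pathlib import Path

# Load every client-side file into memory once, at "resource start".
# The resources/ directory is a made-up stand-in for the real layout.
FILES = {
    str(p.relative_to("resources")): p.read_bytes()
    for p in Path("resources").rglob("*") if p.is_file()
}

class MemoryHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        data = FILES.get(self.path.lstrip("/"))
        if data is None:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)  # served from RAM, no disk access

http.server.HTTPServer(("", 8080), MemoryHandler).serve_forever()
```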

The cached client-side files could be stored in one compressed file per resource, but this would be fairly hard to get right. It'd be quite hard to sort things out if a file changed in the server-side resource - the whole client-side archive would likely have to be rebuilt. We'd end up suffering the same issues filesystems do - fragmentation or expensive writes. I'd say this isn't worth doing.

I'd say the performance of the built-in HTTP server is a significant bottleneck. It runs in its own thread, but that thread can get blocked by the main thread. This is the aspect I suggested profiling: compare the performance of the built-in server with an external one.

Link to comment

I don't know if this helps, but after I get this download error in MTA (Windows client 1.0.3), all my HTTP connections time out (Firefox, the MTA server list). This is a nasty bug because it has an impact outside MTA.

Other connections (attempts to reconnect to the server, internet radio) still work fine.

It takes like 30 seconds before going back to normal.

Link to comment