0.11.4 patch - 2 questions

"
2girlz1cup wrote:
"
Drakier wrote:
"
Caladaris wrote:
If I have a SSD with PoE on it, it does not matter if the PoE files are fragmented or redownloaded - right? (except maybe with used space, which is not a problem)


Even on an SSD you can get better performance by defragmenting your content file. We aren't talking about defragmenting the whole drive, just using the special GGPK defragmenter application.

The benefits of doing that on an SSD are much lower than on a regular mechanical drive, however.


This is wrong on every level; do not ever defragment SSD drives! The location of data on an SSD is irrelevant: retrieval is exactly the same no matter where it is physically located.

Unless you have Windows 8 and use the default Windows defragmenter. Windows 8 does not actually defragment SSD drives; instead it issues the TRIM command. But as you posted, normal defragmentation on an SSD is a bad idea.
IGN : Mettiu
"
ionface wrote:
"
2girlz1cup wrote:
This is wrong on every level; do not ever defragment SSD drives! The location of data on an SSD is irrelevant: retrieval is exactly the same no matter where it is physically located.


What you're saying is inaccurate, since sequential reads are faster than random reads. The SSD firmware has no idea about the GGPK file format, so the game can get all the little packed files out quicker when they are all in a row, i.e. after they have been defragmented with the GGPK Defragmenter.


Except on an SSD there is no rotational latency or read/write head positioning -- so access time is virtually non-existent, and looking up a file on an SSD is simply referencing the MFT. Sequential or random, it makes little difference. That is the whole draw of an SSD.

"
ozzy9832001 wrote:
Except on an SSD there is no rotational latency or read/write head positioning -- so access time is virtually non-existent, and looking up a file on an SSD is simply referencing the MFT. Sequential or random, it makes little difference. That is the whole draw of an SSD.


I believe ionface's response was along the same lines as mine in that we aren't talking about access to the Content.ggpk file itself, but the archive sub-files contained within.

Sequential reads within Content.ggpk will be faster if each internal file is stored as one contiguous unit, rather than requiring random reads to various parts of the file.

If an internal file is broken into 4 pieces, then at minimum 4 read operations are required to stitch that file together. If the internal file is 1 piece, it can be read in as little as 1 read operation. Minimizing read operations is good all around, mechanical or SSD.

Let's say a read operation takes 1 ms (it doesn't, but let's use that for argument's sake). You can read a contiguous file in 1 ms. Now let's say that file is broken up into 4 pieces because the internal file isn't defragmented. Reading that same file now takes 4 ms. That's a huge relative increase in the time needed to read the same data.
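The arithmetic above can be sketched as a toy model. The 1 ms per-operation cost is the same illustrative figure used in the post, not a real drive measurement, and the model deliberately ignores everything except the count of read operations:

```python
# Toy model of the argument above: total read time scales with the
# number of read operations needed to reassemble an internal file.

READ_OP_COST_MS = 1.0  # assumed fixed cost per read operation (illustrative)

def read_time_ms(fragments: int) -> float:
    """Time to read a file split into `fragments` pieces,
    assuming one read operation per fragment."""
    if fragments < 1:
        raise ValueError("a file has at least one fragment")
    return fragments * READ_OP_COST_MS

if __name__ == "__main__":
    print(read_time_ms(1))  # contiguous file: 1 read op
    print(read_time_ms(4))  # file split into 4 pieces: 4 read ops
```

Under these assumptions the fragmented file takes four times as long purely from the extra operations, which is the relational point being made, independent of whether the drive is mechanical or solid-state.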
