Natively, you cannot store files of 4 GiB or larger on
a FAT file system. This barrier is a hard limit of FAT: the file
system uses a 32-bit field to store the file size in bytes, and 2^32
bytes = 4 GiB (actually, the real limit is 4 GiB minus one byte, or
4 294 967 295 bytes, because you can have files of zero length). So you
cannot copy a file that is larger than 4 GiB to any plain-FAT volume.
exFAT solves this by using a 64-bit field to store the file size, but that doesn't really help you here, as switching to exFAT requires reformatting the partition.
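As a quick sanity check of that arithmetic (shell arithmetic shown purely for illustration):

```shell
# A 32-bit size field can hold values 0 through 2^32 - 1, so the
# largest storable file size on a FAT volume is:
echo $(( (1 << 32) - 1 ))    # prints 4294967295, i.e. 4 GiB minus one byte
```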
However, if you split the file into multiple files and recombine them later, that will allow you to transfer all of the data, just not as a single file (so you'll likely need to recombine the file before it is useful). For example, on Linux you can do something similar to:
$ truncate -s 6G my6gbfile
$ split --bytes=2GB --numeric-suffixes my6gbfile my6gbfile.part
$ ls
my6gbfile my6gbfile.part00 my6gbfile.part01
my6gbfile.part02 my6gbfile.part03
$
Here, I use truncate to create a sparse file 6 GiB in
size. (Just substitute your own file.) Then, I split it into segments
approximately 2 GB in size each; the last segment is smaller, but that
does not present a problem in any situation I can come up with. You can
also, instead of --bytes=2GB, use --number=4
if you wish to split the file into four equal-size chunks; the size of
each chunk in that case would be 1 610 612 736 bytes, or exactly 1.5 GiB.

To combine them, just use cat (concatenate):

$ cat my6gbfile.part* > my6gbfile.recombined
Confirm that the two are identical:

$ md5sum --binary my6gbfile my6gbfile.recombined
58cf638a733f919007b4287cf5396d0c *my6gbfile
58cf638a733f919007b4287cf5396d0c *my6gbfile.recombined
$
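The --number variant mentioned above behaves the same way; as a self-contained sketch using a small stand-in file rather than 6 GiB (the name demo.bin is illustrative, not from the original example):

```shell
# Split a 12-byte file into four equal 3-byte chunks with --number,
# then recombine and verify, mirroring the workflow above.
printf 'abcdefghijkl' > demo.bin
split --number=4 --numeric-suffixes demo.bin demo.bin.part
cat demo.bin.part* > demo.bin.recombined
cmp demo.bin demo.bin.recombined && echo identical   # prints "identical" on success
```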
This approach works with any maximum file size limitation.

Many file archivers also support splitting an archive into multi-part files. Earlier this was used to fit large archives onto floppy disks; these days it can just as well be used to overcome maximum file size limitations like this one. File archivers also usually support a "store" or "no compression" mode, which is useful when you know the contents cannot be usefully compressed any further, as is often the case with already compressed archives, movies, music and so on. In such a mode the archive simply acts as a container that gives you the file-splitting ability: the actual data is copied into the archive file unchanged, saving on processing time.
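The "store-mode container" idea can be sketched with coreutils alone, assuming GNU tar and split: tar (which stores without compression by default) acts as the container, and split provides the multi-part behaviour. The file names below are illustrative:

```shell
# Create a stand-in file, pack it into an uncompressed tar stream, and
# split the stream into parts small enough for the target file system.
truncate -s 10M bigfile
tar -cf - bigfile | split --bytes=4M - bigfile.tar.part
# Recombine the parts and extract the file from the container; -O writes
# the extracted contents to stdout so the original is not overwritten.
cat bigfile.tar.part* | tar -xf - -O > bigfile.recombined
cmp bigfile bigfile.recombined && echo identical   # prints "identical" on success
```

With a real archiver such as zip or 7-Zip you would instead use its own split and store options, but the principle is the same.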
Source