I have a file of random text that is 27 GB in size, and after compression it becomes 40 MB or so.
A 3.5 GB SQL file becomes 45 MB after compression.
But a 109 MB text file only becomes 72 MB after compression. What could be wrong with it?
Why does it compress so little? It should be 10 MB or so, or am I missing something?
As far as I can see, all the files contain only English text and some punctuation symbols (/ , . - = + etc.).
Why is that?
If nothing is wrong, can you tell me how I can compress a text file much further?
I can code in PHP, so that is no problem.
The compression ratio of a file depends on its content.
Most compression algorithms work by replacing repeated data with a short reference that says what to repeat and how many times, so highly repetitive content shrinks dramatically while content with little repetition barely shrinks at all.
For example, a file containing the letter a repeated 1,000,000 times can be compressed far more than a file with completely random content.
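You can see this effect directly. Here is a small sketch (in Python, using the standard `zlib` module, since no code was given in the question; the same idea applies to PHP's `gzcompress`) comparing a highly repetitive input against random bytes of the same size:

```python
import os
import zlib

size = 1_000_000

# Highly repetitive data: the letter 'a' one million times.
repetitive = b"a" * size

# Random data of the same size has no repetition to exploit.
random_data = os.urandom(size)

compressed_repetitive = zlib.compress(repetitive)
compressed_random = zlib.compress(random_data)

# The repetitive input shrinks to a tiny fraction of its size,
# while the random input stays close to its original size.
print(len(compressed_repetitive))
print(len(compressed_random))
```

On a typical run the repetitive input compresses to around a thousand bytes, while the random input remains close to 1,000,000 bytes. Your 27 GB and 3.5 GB files are evidently very repetitive; the 109 MB file apparently is not, which is why it only shrinks to 72 MB.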
To say anything more specific about your case, please provide more information about the contents of the 109 MB file and which compression tool you are using.