Nice, thanks for stopping me from running an unknown Linux binary from a 3 year old Russian forum thread :-) Undocumented feature in your webapp; append &w=1 to get a wider image. I wish there was a "make the image square" option in gzthermal.
That's an awesome tool. I wish it would output to the terminal, too.
Thinking about that, as an exercise in Unix philosophy, how would you add hex output to such a tool? One could add a -x option or such that would output like xxd(1), but then you might want to go and implement more of xxd's options like maximum line width and such, leading to the duplication that composition of tools is supposed to prevent. But how would you make it composable? Either gzthermal would need to understand xxd's output, or xxd would need to know about terminal escape codes to properly map the coloured output.
It is an interesting article, but IMO the author is barking up the wrong tree. Compromising clarity to shave a few percent off the number of bytes sent to the client is unlikely to work well beyond small examples.
For complex objects, such local optimizations encourage moving closer to the final image representation instead of leveraging SVG's higher-level vector capabilities. If final file size is of utmost importance for a complex object, taking a high-level view and refactoring as needed will likely work much better than replacing rectangles with paths, absolute motions with relative ones, etc. "Premature optimization is the root of all evil".
It's more about digging into what tools like svgo and gzip actually do to a file, and how changes in implementation change the compression behaviour. In 99.9% of circumstances it's not something you'd want to do manually as a developer; have an image optimizer do it instead.
Not sure readable SVG is an objective for the delivered end result; I don't think so, just as readable HTML, JS, CSS etc. isn't required. Maybe add a source map if you really need to, or put the original SVG right next to a `.min.svg` file which the user sees.
But it's a bit of a silly thing really; what really needs to happen is the ability to compile a webapp to an efficiently compressed and easily runnable binary file, instead of this silly minified text that still needs to be parsed client-side.
That's what I thought, too. Using SVG at all instead of images is a huge win for bandwidth. Adding gzip is painless and makes the deliverable tiny. Any further optimization effort would likely be better spent on another part of the site.
Don't get me wrong - it's a great article with very interesting techniques that would be great for very large SVGs. But if your SVG already fits into a single TCP packet (~1500 bytes, according to https://stackoverflow.com/questions/2613734/maximum-packet-s...), does shrinking it further make any difference?
> That's what I thought, too. Using SVG at all instead of images is a huge win for bandwidth. Adding gzip is painless and makes the deliverable tiny. Any further optimization effort would likely be better spent on another part of the site.
Sometimes that’s the last part left to do.
I’ve recently started working on improving the performance of https://quasseldroid.info/ (I’ve still got to optimize the images), and doing SVG optimizations got some SVGs from 14 KiB gzipped down to 400 bytes (which meant I could inline them into the CSS). This can be really worth it.
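For what it's worth, a sketch of what that inlining can look like (the class name and icon are made up): once the SVG is tiny enough, it fits into a data URI and saves an HTTP request.

```css
/* Hypothetical tiny icon inlined into the stylesheet.
   Note '#' must be escaped as %23 inside a data URI; this is only
   minimally escaped, some tools also encode spaces etc. */
.icon-dot {
  background-image: url('data:image/svg+xml,<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 10 10"><circle cx="5" cy="5" r="4" fill="%23f00"/></svg>');
}
```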
I think SVG is best done with a 'stylesheet' of SVG definitions that then get 'used' via the use tag in inline SVG. So you make one rectangle and call it something sensible. For icons and other mono elements the 'currentColor' feature is great, as it will pull your colour from whatever contains that particular inline SVG, e.g. an 'a' link can surround the inline SVG and colour it accordingly.
Taking the 'SVG stylesheet' approach means that the actual SVG can be generated, so it can take variables in the backend code, e.g. to localise the icons/images. This can then be AJAX-loaded into the DOM in such a way that all the SVG images are loaded only once, in one reasonably optimised but editable file. Subsequent pageloads will be fine, no need for all of those HTTP requests.
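A minimal sketch of that pattern (ids and colours made up): one definition, referenced twice, with the fill following the surrounding link's colour.

```html
<!-- The 'stylesheet': define once, hidden, 'use' wherever needed -->
<svg xmlns="http://www.w3.org/2000/svg" width="0" height="0">
  <symbol id="icon-box" viewBox="0 0 10 10">
    <rect x="1" y="1" width="8" height="8" fill="currentColor"/>
  </symbol>
</svg>

<!-- currentColor picks up each link's colour -->
<a href="/inbox" style="color:#06c">
  <svg width="16" height="16"><use href="#icon-box"/></svg> Inbox
</a>
<a href="/trash" style="color:#c00">
  <svg width="16" height="16"><use href="#icon-box"/></svg> Trash
</a>
```

(Older browsers want `xlink:href` on the use element instead of `href`.)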
I use SVGO a lot, but I think there is a lot to be said for using 'rect' instead of a path.
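The same shape both ways, for comparison:

```html
<!-- Identical rectangles; the path is a few bytes shorter, but the rect
     keeps its width/height semantics for editors and animation -->
<rect x="10" y="10" width="30" height="20"/>
<path d="M10 10h30v20h-30z"/>
```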
For some reason I decided to re-do the 18Mb Adobe file that was 'vector' that Instagram have as their definitive icon. I got this down to four lines of SVG, with the result being accurate rather than yet another kid-with-photoshop-redesign effort. Had I taken the 'let's gzip it morely' approach then I would still be waiting for my icon to download.
SVG is not just a container for coloured paths. There are far more things it can do, and far more ways to optimize it.
Instead of SVGO you can use svgcleaner[1] with zopfli, which is a bit better.
Yes, nothing compares to manual optimization, but if you created an SVG using a vector editor, there is no point in manually removing all the garbage it adds.
I love manually minifying SVG, JavaScript and HTML to shave off a few bytes here, a few bytes there. I’d really like a tool like gzthermal, but editable and showing the compressed size, so that you can make changes and immediately see the effect. Bonus points if it comes with a clever diffing technique so that you can minify parts separately and combine them in the most efficient way.
As it is, I just run something like `watchexec -f "*/the-filename" "gzip -c9 < x | wc -c"` in a terminal and it shows me the size whenever I save the file.
1. As sizes grow you'll be better off using CSS to style rather than attributes -- this will often be a much bigger win than fiddling around with #00f vs blue (see the sketch after this list).
2. I think replacing rects etc. with paths can be problematic, especially if you want to animate elements based on attributes: it's much easier to interpolate from width=10 to width=20 than to figure out what the intermediate path commands might look like. Also, if you want people to be able to edit the resulting SVG, keeping semantically meaningful higher-level features such as rect and circle will help editors understand what's going on and provide the power tools for modifying those shapes.
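The sketch for point 1, with made-up class names: one rule instead of repeating presentation attributes on every node.

```html
<svg viewBox="0 0 100 40">
  <style>
    /* one rule instead of fill="..." stroke="..." on every element */
    .bar { fill: #00f; stroke: none; }
  </style>
  <rect class="bar" x="5"  y="5" width="20" height="30"/>
  <rect class="bar" x="35" y="5" width="20" height="30"/>
  <rect class="bar" x="65" y="5" width="20" height="30"/>
</svg>
```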
> i.e. much easier to interpolate from width=10 to width=20
I know it's a specific example, but don't the usual browser rules apply here? I.e. shouldn't this be a scale transform rather than an element property modification, to take advantage of the GPU?
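A sketch of what that would look like (selectors made up): animate transform, which browsers can hand off to the compositor, rather than the width attribute.

```html
<svg viewBox="0 0 100 100">
  <style>
    /* transform-box makes the origin relative to the element's own box,
       not the viewport, so we scale from the rect's left edge */
    rect { transform-box: fill-box; transform-origin: left center;
           transition: transform 0.3s; }
    rect:hover { transform: scaleX(2); } /* width 10 -> effectively 20 */
  </style>
  <rect x="10" y="10" width="10" height="10"/>
</svg>
```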
Since the article touches on Huffman encoding, wouldn't "#f00" and "#00f" compress better than "red" and "#00f"? Minimizing uncompressed size is not what you want to pursue.
Mind you, it is not as well written, nor does it have such cool tools/graphics. On the other hand, my article is more practical for real projects IMHO, since I focus more on code structure and general patterns.
I love reading about those crafted tiny files and micro-optimizations though.
Likely the requirement to be handled by a non-XML parser and to ease adoption. In practice the namespace will always be copied from somewhere because no one ever types it from memory.
Still, IMHO inline SVG is a horrible mess as it is parsed as sort-of XML, has its own weird rules, doesn't show up as written in the DOM Explorer, and breaks on completely strange "errors" like not naming the XLink namespace prefix "xlink". Can I still wish for XHTML?
From what I read, it goes back to the HTML5 vs XHTML disputes. I think the story was something like this:
One of the proposed advantages of XHTML was the plan that at some point you'd be able to embed other XML dialects into XHTML - such as SVG.
HTML5 is famously not XML or SGML but a grammar of its own - and the ability to embed other languages was consciously dropped. However, the use-case of defining SVG images inline was considered important enough that it should be kept.
How to do so if HTML5 is not even XML? Just make a special exception for SVG and declare xml-that's-a-valid-svg a part of the HTML grammar.
Because it's a special case built into the language, there is no need for generic namespace declarations like in XML - so they can be dropped.
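You can see the exception in action: inside an HTML5 document the parser switches into SVG mode at the <svg> tag, so no namespace declaration is needed, while a standalone .svg file still requires one.

```html
<!-- Inline in HTML5: no xmlns needed -->
<p>Status: <svg width="12" height="12"><circle cx="6" cy="6" r="5"/></svg></p>

<!-- Standalone file.svg: the namespace is mandatory -->
<svg xmlns="http://www.w3.org/2000/svg" width="12" height="12">
  <circle cx="6" cy="6" r="5"/>
</svg>
```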
Probably a dumb question, but I always ask myself whether gzipping (or brotli-ing) a minified file instead of the original changes the size significantly, as spaces and newlines will be the most frequent characters and so will compress very well.
On small files the difference will of course be significant, but in big files?
For big files the difference is also significant: minification removes and rewrites large chunks of the source code, and the result compresses to a smaller size as well (the compressor's dictionary, for example, ends up containing a lot less data). See https://mathiasbynens.be/demo/jquery-size for an example of a library compressed / minified / zopflied (?), with a comparison of file sizes. The most recent version in there (3.0.0-alpha-1) is 75 KB "just gzipped" and 29 KB gzipped + minified; just minified is 84 KB, so gzip alone only does about 9 KB better than minification alone, while the combination gets you down to 29 KB.
When you use CSS to style an SVG, use attribute selector instead of class to save some space.
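For example (a sketch, selectors made up): an attribute selector matches markup the element already carries, so you skip the class="…" on every node.

```html
<svg viewBox="0 0 20 20">
  <style>
    /* attribute selectors reuse existing attributes instead of class="..." */
    rect[rx]      { fill: #c00; }
    path[d^="M3"] { fill: #00c; }
  </style>
  <rect x="2" y="2" width="16" height="6" rx="2"/>
  <path d="M3 12h14v6H3z"/>
</svg>
```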
Also, it sometimes makes sense to use arcs and quadratic curves instead of cubic Bézier curves. Quadratic curves save you one control point, and are easy to chain with the T path command.
Arcs let you save on geometrically correct ellipse fragments: Bézier curves can only approximate an ellipse (you'd need around 8 quadratic segments, each with its own control point and endpoint, for a decent one), while a single arc command describes an elliptical segment exactly in seven arguments.
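For instance, a full circle (the degenerate ellipse) is exact with just two arc commands:

```html
<!-- Exact circle, radius 40, centred at (50,50): two A commands.
     Béziers could only approximate this. -->
<path d="M10 50A40 40 0 1 1 90 50A40 40 0 1 1 10 50"/>
```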
The examples given are completely representative of the sorts of changes you can make to reduce size, and frankly fairly representative of the sorts of compression ratio improvements you’ll get (though for some types of things you can definitely do way better). When talking about optimisation for gzip, you are mostly talking about 1–2% over general uncompressed byte count minification. And in practice a lot of your minification is of small things, e.g. SVG icons in an app or site—they’re easier to minify because of their reduced scope.
Would the compression work better if you inlined the SVG and served the whole HTML+SVG as one gzipped file?
I also think that depending on the SVG's complexity you might shave a few bits, but at the expense of more complex processing - a polygon path vs. two squares, in this example...
That is another thing to consider. Google's mod_pagespeed, iirc, analyzes image/css/js file size and whatnot and determines whether it's more effective to inline it or to load it separately (and does so automatically)
What use cases are there for minifying SVG? It never struck me as something large enough for its file size to be worth reducing. It's the CPU/GPU that you usually want to worry about, since only a few characters can cause a lot of time to be spent drawing the frame.
The case for reducing the size of SVG is the same as for any kind of web asset: it reduces the initial transmission time (and improves the likelihood of a cache hit) and so reduces the time taken to load the page.
I doubt it's worthwhile squeezing the last couple of bytes of gzip compression out of an image (other than as a fun and informative exercise), but when programmatically generating SVGs it's easy to do so naïvely, and in these cases you can get big savings in file size. Here's a worked example where I was able to reduce a generated image to about 17% of its original size through simple fixes: http://garethrees.org/2013/08/02/svg/
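A typical sketch of the kind of naïve output I mean (shapes made up): the generator repeats the same styling on every element, when styling a group once does the job.

```html
<!-- Naïve generator: the same style repeated on every element -->
<circle cx="10" cy="10" r="5" style="fill:#ff0000;stroke:none"/>
<circle cx="30" cy="10" r="5" style="fill:#ff0000;stroke:none"/>
<circle cx="50" cy="10" r="5" style="fill:#ff0000;stroke:none"/>

<!-- Same picture, styled once on a group (attributes inherit) -->
<g fill="red" stroke="none">
  <circle cx="10" cy="10" r="5"/>
  <circle cx="30" cy="10" r="5"/>
  <circle cx="50" cy="10" r="5"/>
</g>
```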
Similarly, tools like Inkscape need to represent information for editing the image (such as its layer or grouping structure; or style information maintained on a per-object basis) which is not necessary for displaying it, and this can amount to a large proportion of the SVG file size.