In the good old days you got pretty far by combining a CDN and compression to deliver your frontend code. Throw in sub-genres like cookie-less domains, minification, caching and some DNS trickery, and you get the picture.
Early in 2013 I gave a ramp-up talk at Razorfish in Berlin on why this is important, what you can achieve and which tools help you do it. Back then there was no reason, in a frontend performance 101, to go to great lengths about SPDY or HTTP/2.0.
Things have changed. SPDY got adopted by some major players out there, exposing it to many smart people, some of whom are even writing about it.
Here is a piece by CloudFlare’s John Graham-Cumming, who sheds some light on the difference SPDY makes and why we need to rethink some of our current toolkit to get the best out of it. Thanks for sharing, John.
Mark Zeman is a digital Creative Director by day and founder of SpeedCurve by night. When he writes about the performance of responsive websites, he does do some pitching for his product, but the article is worthwhile anyway, mostly because of the pointers to other interesting material. Have fun.
It’s the passion that makes us lose sight of the danger looming ahead, the trap we’re edging towards thanks to our subjective assumptions and vague speculation, the trap of building an overdesigned and overcomplicated system for its own sake. # *
Most of what we develop is very complex, yet at the same time it is just glorified text processing and arithmetic.
Complexity in software has two roots. One is the inherent complexity of our surroundings. It is amplified by software’s need to work on the assumption that something is either true or false, where in the physical world this black-or-white decision is not always possible. The other is a bit esoteric: the acquired complexity we think we need to model the undecided physical world in binary. We usually are not in a position to understand the full scope of what we are working on. We assume and estimate, but we do not know. When we are done, we have a pretty good idea of what we did, but we still do not know whether we actually are done. We also do not know for sure whether all the flexibility we built into it is ever going to be needed to the extent we thought it would (acquired complexity).
There are many ways to tackle complexity, to break it down and make the goal achievable. Along the way decisions need to be made based on our understanding of the problem at that time. While something sounded like a good idea at the time, it might turn out we have piled up technical debt. We are then forced to pay back the debt by refactoring our code so that it reflects our up-to-date, and most of the time much better, understanding of the problem.
The question Maxim raises in his article CMS Trap (quoted at the top) is, at first glance, how to speed up the clearing of technical debt, or how to avoid accumulating too much of it in the first place. His context is a special one, aiming at the early stages of developing a website for a start-up. But let us leave that aside for now.
Flexibility is the ability of a system to change its behavior. The quicker it can do that, the more flexible it is. While some behaviors can be easily changed, others require much more thought, thus adding complexity. Take something that spits out structured markup: displaying a different data point anywhere in a template is not hard. Truncating it by an arbitrary value chosen by an editor needs much more work.
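To make the difference concrete, here is a minimal sketch (all names hypothetical, not from any particular CMS): swapping which data point a template shows is a one-line change, while letting an editor pick an arbitrary truncation length drags in stored settings, validation and fallback logic.

```python
def render_title(item: dict) -> str:
    # Changing the displayed data point is trivial: swap the key.
    return f"<h2>{item['title']}</h2>"

def render_title_truncated(item: dict, settings: dict) -> str:
    # Editor-controlled truncation: the length now comes from stored
    # settings, so we must validate it and decide on a fallback.
    limit = settings.get("title_length")
    if not isinstance(limit, int) or limit <= 0:
        limit = 80  # fallback when the editor's value is missing or invalid
    title = item["title"]
    if len(title) > limit:
        # Reserve one character for the ellipsis.
        title = title[: limit - 1].rstrip() + "…"
    return f"<h2>{title}</h2>"
```

The second function is roughly three times the code of the first, and it still ignores real-world concerns like multi-byte lengths or word boundaries, which is exactly the acquired complexity the assumption buys us.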
There are two assumptions here. First, that there is an editor next to the developer. Second, that this editor needs control of how to display data without involving the developer.
The fundamental question is how to establish whether an assumption is valid enough to justify adding complexity.
The German parliament's NSA committee now wants to hear testimony from Zuckerberg and Schmidt about the NSA wiretapping Germany's communications. Does that make sense?
Reading current newspapers and magazines in Germany could give you the impression we’re done with Snowden, the NSA and GCHQ. They are listening in on all our communications? Even on Frau Merkel’s?
That is not what we should be afraid of. They are friends. Also, we do not want to do anything about it. Let’s rather talk about the evil that is Facebook and Google.
Obviously, they are soft targets. It resonates well with Jane and John Doe’s perception of them as evil exploiters of seemingly private data.
What does that have to do with the Five Eyes tapping into freakin’ sea cables to listen in on every phone call, you ask? Nothing, of course. It’s nothing but a smoke screen, a spin doctor’s concoction.
The best part is, this spin comes in handy for two democratic forces that are usually at odds. The executive branch does not want to do anything about the Five Eyes. Not because it fears repercussions, but because going after them would mean denying itself the ability to pry open private communications. On the other hand, the so-called fourth power has been fighting a battle against Google and Co. for years. Instead of innovating, they blame their loss of reach, revenue, importance, credibility, you name it, on Google.
Even if you find the topic overwhelmingly complex (and it is!), you should at the very least be able to tell that something is at work when politics and the press mutually agree on something.
[Update] Read Michael “mspro” Seemann’s German article about this spin here. He explores the connections and links between the involved parties in more detail and adds even more beef.
PBS’s Frontline has put out the first part of the documentary “United States of Secrets”, explaining in great detail how the US government came to spy on millions of Americans. It’s a two-hour watch you might find worthwhile. Check out this NYT piece about the series.
Also worth your time is Glenn Greenwald’s book “No Place to Hide”. Preorder it in print or download it to your Kindle today at Amazon. See Wired for a narrative and Ars Technica for images of NSA technicians implanting beacons in Cisco routers.
There would appear to be numerous potential problems with practical implementation of the right to be forgotten. How much time must pass before lawful information becomes “outdated”? Will the rules be applied evenhandedly in separate jurisdictions in Europe? What about professional negligence or low-level public figures trying to control public perceptions; what information does the public have a right to know? #
Lisa Fleisher, Wall Street Journal:
An EU court ruling on Tuesday saying that Google must scrub search results because of personal-privacy concerns might perplex Americans, and yet seem perfectly logical to Europeans. #
Lily Hay Newman, Slate:
“How, ENISA asks, would government force the forgetting of a couple’s photograph when one person wants the photo forgotten and the other doesn’t? And how can data be tracked down and ‘forgotten’ when we don’t even know who has seen or stored it?” Stewart Baker wrote after the report’s release. #
Ignazio Fariza & Rosario E. Gomez, EL PAÍS:
The EU forces Google to remove links to harmful information. #
At this moment, for example, in 1984 (if it was 1984), Oceania was at war with Eurasia and in alliance with Eastasia. In no public or private utterance was it ever admitted that the three powers had at any time been grouped along different lines. Actually, as Winston well knew, it was only four years since Oceania had been at war with Eastasia and in alliance with Eurasia. But that was merely a piece of furtive knowledge, which he happened to possess because his memory was not satisfactorily under control. Officially the change of partners had never happened. Oceania was at war with Eurasia: therefore Oceania had always been at war with Eurasia. (George Orwell, “1984”)
An internet search engine operator is responsible for the processing that it carries out of personal data which appear on web pages published by third parties. Thus, if, following a search made on the basis of a person’s name, the list of results displays a link to a web page which contains information on the person in question, that data subject may approach the operator directly and, where the operator does not grant his request, bring the matter before the competent authorities in order to obtain, under certain conditions, the removal of that link from the list of results. (Court of Justice of the European Union, “C-131/14”)
Let me rephrase that. A search engine is responsible for the semantic meaning of the documents it indexes.