The Weekly Top 7 DEV Articles You Should Read
Making Web Better with Blocks, Against The Clean Code, Don't Optimize SQL, Every Business DDoS, Speed VS Safety in deployments, Infrastructure Platforms, Keeping up with Web Development
1- Making the web better. With blocks!
Joel Spolsky proposes a high-level protocol to standardize the creation of blocks inside web-based content editors. (27/01/2022)
https://www.joelonsoftware.com/2022/01/27/making-the-web-better-with-blocks/
TL;DR:
Content editors across the web make heavy use of blocks. A block could be a calendar, a code snippet, a table, or an image: in practice, anything that is not strictly text.
The UI concept we can summarize as “Insert Block” is non-standard and proprietary. Every blog or content-creation platform has to implement its own way to include blocks in its editor. This lack of standardization is hard on end users: writers have to adapt to very different tools that all do the same thing.
The author proposes a new high-level, open, free, and non-proprietary Block Protocol. The noble purpose is to make it much easier for app developers to support many block types, and to give users the same experience everywhere.
In addition, blocks can have types: it will be possible to specify a schema that a Block object should follow. In this way, blocks become machine-readable.
The implementation of the Block Protocol is in progress, and a very early draft of the specification is already accessible.
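To make the "typed, machine-readable block" idea concrete, here is a minimal sketch of what validating a block against its type's schema could look like. The Block Protocol is still an early draft, so every field name below is a hypothetical illustration, not the actual specification:

```python
# Sketch of a typed, machine-readable block. All field names are
# hypothetical illustrations; the real Block Protocol spec is a draft.

def validate_block(block: dict, schema: dict) -> bool:
    """Check that a block carries every property its schema requires."""
    required = schema.get("required", [])
    return all(prop in block.get("properties", {}) for prop in required)

# A hypothetical schema for an "image" block type.
image_schema = {
    "type": "image",
    "required": ["url", "alt"],
}

# Any editor that understands the schema could render or edit this block.
image_block = {
    "type": "image",
    "properties": {"url": "https://example.com/cat.png", "alt": "A cat"},
}

print(validate_block(image_block, image_schema))  # True
```

Because the schema travels with the block type rather than with any one editor, every app that implements the protocol can accept the same block unchanged.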
2- There’s No Such Thing as Clean Code
Steve Barnegren embarks on a crusade against incorrect terminology. (27/01/2022)
https://www.steveonstuff.com/2022/01/27/no-such-thing-as-clean-code
TL;DR:
"Clean" does not describe any particular characteristic of code. The word can relate to several aspects without binding to any one of them. For example, code that is easily readable and understandable could be called clean, but clean could also mean elegant, performant, scalable, or consistent.
Most of the traits bundled into the word "clean" are at odds with each other: the most performant code is probably not the most readable.
The point is that, applied to code, "clean" often just means "good". Developers who call what they just wrote clean haven't said why it is good. It's fundamental to have a technical discussion about why one solution is better than another; "clean" is just a lazy shortcut around that discussion.
In the author's words: "Terms like "clean" allow us to cop-out, rather than working to improve our ability to articulate our ideas."
Furthermore, we need precise terms when working in a team, and "clean code" has no standardized meaning. It's far better to describe code as "encapsulated" or "testable", "mockable" and "reusable", "performant" or "simple": those terms have a precise semantic denotation for everyone.
3- Learn From Google’s Data Engineers: Don’t Optimize Your SQL
During his time at Google, Galen B. realized that SQL written by some of the smartest data engineers in the world is inefficient. (27/01/2022)
https://blog.devgenius.io/learn-from-googles-data-engineers-don-t-optimize-your-sql-43f0da30701
TL;DR:
Snapshotting massive tables without caring about the size of the data created, or using JOIN and EXISTS on slowly changing dimensions (SCDs) instead of MERGE, is the default behavior of Google's data engineers.
Why don't they care at all about compute optimization or the size of the data created?
First of all, modern databases have great real-time optimizations built in. If the database itself takes care of optimization, it's wasteful to spend time hand-tuning queries. Data engineers leave query optimization to the developers of the data engines they use.
Secondly, compared to the salary of a data engineer, the cost of cloud computing is negligible. It's better to have a proficient data engineer build new data assets and anticipate the needs of the business than to have them waste time optimizing queries.
Thirdly, storage nowadays is nearly free. For the same reason, it's better to spend data engineers' time creating business value.
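The salary-versus-compute argument is easy to check with back-of-the-envelope arithmetic. Every figure below is an illustrative assumption, not a number from the article:

```python
# Back-of-the-envelope comparison: engineer time vs. compute cost.
# All figures are illustrative assumptions, not from the article.

ENGINEER_HOURLY_COST = 100.0   # fully loaded cost of a data engineer, $/hour
OPTIMIZATION_HOURS = 8         # a full day spent hand-tuning one query
COMPUTE_COST_PER_RUN = 0.05    # extra compute the unoptimized query wastes, $/run
RUNS_PER_YEAR = 365            # a daily batch job

optimization_cost = ENGINEER_HOURLY_COST * OPTIMIZATION_HOURS
yearly_compute_savings = COMPUTE_COST_PER_RUN * RUNS_PER_YEAR
years_to_break_even = optimization_cost / yearly_compute_savings

print(f"Optimization costs ${optimization_cost:.0f} once, "
      f"saves ${yearly_compute_savings:.2f}/year: "
      f"break-even in {years_to_break_even:.0f} years")
```

With these (hypothetical) numbers, a day of tuning takes decades to pay for itself, which is the article's point: the engineer's time is the scarce resource, not the compute.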
4- Reasons Why Every Business is a Target of DDoS Attacks
In this article, we discover why any business on the web could be successfully DDoSed. (31/01/2022)
https://thehackernews.com/2022/01/reasons-why-every-business-is-target-of.html
TL;DR:
DDoS attacks are steadily on the rise: in 2021 their number grew by 24%, and 74% of them were multi-vector attacks.
In the past, the risk of falling victim to this kind of cyberattack wasn't high. Today, even small and medium enterprises (SMEs) should be worried. Although roughly 40% of DDoS attacks are aimed at banks and financial platforms, any other business can be a target too.
The principal reasons why every business could be a target of DDoS attacks are:
Many companies use old, traditional technology that no longer suffices; legacy firewalls are now of little use. And many SMEs take a laid-back approach, waiting until the worst happens.
The pandemic accelerated the digitalization of organizations around the world, widening the attack surface.
Technological advances have made DDoS attacks cheap and easy to carry out.
Bringing a website down lowers its ranking in search engines, which makes a DDoS a tool for gaining competitive advantage. Website availability, together with customer feedback, is a resource of very high value.
5- Software Deployment, Speed, and Safety
Marc Brooker, an engineer at Amazon Web Services (AWS), compares speed and safety for production changes. (31/01/2022)
https://brooker.co.za/blog/2022/01/31/deployments.html
TL;DR:
We can define “deployment” as any production modification of software or configuration. The central challenge is defining the goals and trade-offs between FAST deployments and SAFE deployments.
The advantages of FAST deployments are the following:
Developers usually want fast releases in production of the code they are working on
Smaller increments to the code base arguably introduce fewer bugs
Fast fixing of security alerts and bugs
Fast development and shipping is better for every customer
The cons are all about RISK and SAFETY. Any new deployment or change in production may contain new flaws, and automated and manual tests can only partially mitigate this.
Incremental deploys greatly reduce the RISK introduced by those changes, but only if enough time passes between deployments, because flaws need time to show up. Time here is not measured by the clock: it's "work" time, better measured by the number of requests the system has to serve.
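A minimal sketch of that idea: gate the next stage of a rollout on requests served rather than on wall-clock time. The class name and threshold below are hypothetical illustrations, not anything from the article:

```python
# Sketch of a deployment gate that measures "bake time" in work done
# (requests served) instead of wall-clock time. Names and thresholds
# are hypothetical illustrations.

class BakeGate:
    def __init__(self, required_requests: int):
        self.required_requests = required_requests
        self.served = 0

    def record_request(self, count: int = 1) -> None:
        self.served += count

    def ready_for_next_stage(self) -> bool:
        # Flaws need traffic to surface; a quiet hour proves nothing.
        return self.served >= self.required_requests

gate = BakeGate(required_requests=100_000)
gate.record_request(40_000)
print(gate.ready_for_next_stage())  # False
gate.record_request(60_000)
print(gate.ready_for_next_stage())  # True
```

On a low-traffic night the gate simply stays closed longer, which is exactly the behavior a clock-based bake time fails to provide.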
The RISK of a bad deployment is also correlated with the size of the system under analysis. If we want high availability, we can rely on distributed systems: with a linear investment of resources, we get an exponential increase in availability.
6- Building Infrastructure Platforms
Poppy Rowse and Chris Shepherd identify 7 key principles to build Infrastructure Platform teams correctly (01/02/2022)
https://martinfowler.com/articles/building-infrastructure-platform.html
TL;DR:
In the past, adding a simple API for your business was pretty complicated; nowadays, a developer can build and deploy an app to production in moments. The problem is that every developer may use a different platform and configuration to host and deploy software. By building a platform, it is possible to save time, reduce cloud spending, and increase the security and rigor of the infrastructure.
Having a strategy with a measurable goal is the first thing to consider when thinking about an infrastructure platform. To define a strategy, you need the right people to identify a specific problem. For example: "We have had outages of our products totaling 160 hours and over $2 million in lost revenue in the past 18 months." It's easy to translate this problem into a GOAL, "Have less than 3 hours of outages in the next 18 months", and finally define a strategy to tackle the problem.
A strategy can be designed using POST MORTEM and FUTURE BACKWARDS sessions, which means using both a past and a future lens.
The purpose of the POST MORTEM session is to identify the root cause of the problems. The purpose of the FUTURE BACKWARDS session is to identify what would need to be true to meet goals.
At the end of these sessions, you will hopefully have a wonderfully practical list of things to do to meet your goal. Otherwise, you will realize that spinning up a team to build an infrastructure platform isn't part of your strategy.
7- How to keep up with web development without falling into despair
Baldur Bjarnason lifts the burden of staying up to date with simple tips. (31/01/2022)
https://www.baldurbjarnason.com/2022/i-cant-keep-up-with-web-dev/
TL;DR:
Web development is not unique in having to keep up with change; the burden is arguably much lower than in other fields, such as medicine or biotech.
There are two main reasons why other fields don't have the same problems keeping up that we do:
They have collective/institutional filters to help you keep up with what is fundamental.
They specialize, without isolating, keeping up only with the knowledge that is a core part of their job.
To filter the overwhelming flow of information, check out only the news relevant to your work questions.
Choosing your WORK questions correctly is the tricky part. You can start with general-purpose topics and, as you do your research, rephrase them to become more specific. The questions change with the job: they can range from high to low level, from implementation details to abstractions.
The last tip is to set aside about an hour or two a day for research.
"And that’s how keeping up stops being a chore and becomes an interest-driven research activity that feeds your enthusiasm instead of draining it."