Assorted links for Tuesday, January 7:
- What is Inference Parallelism and how it works
Inference parallelism aims to distribute the computational workload of AI models, particularly deep learning models, across multiple processing units such as GPUs.
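As a rough illustration of the idea (my own sketch, not from the linked article), here is a minimal PyTorch example that splits a two-layer model across two GPUs so each device carries part of the inference workload — a simple form of model/pipeline parallelism. The layer sizes and device names are placeholders, and it assumes two CUDA devices are available.

```python
import torch
import torch.nn as nn

# Minimal sketch: place different layers on different GPUs so the
# forward pass is spread across devices (model/pipeline parallelism).
# Assumes two CUDA devices; sizes are arbitrary for illustration.
class SplitModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Linear(1024, 4096).to("cuda:0")
        self.stage2 = nn.Linear(4096, 10).to("cuda:1")

    def forward(self, x):
        # First half of the computation runs on GPU 0 ...
        h = torch.relu(self.stage1(x.to("cuda:0")))
        # ... then the intermediate activation moves to GPU 1 for the rest.
        return self.stage2(h.to("cuda:1"))

model = SplitModel().eval()
with torch.no_grad():
    batch = torch.randn(32, 1024)
    logits = model(batch)   # lives on cuda:1
    print(logits.shape)     # torch.Size([32, 10])
```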
- Open Source Innovation Comes to Time-Series Data Compression
NetApp Instaclustr collaborated with the University of Canberra through the OpenSI initiative to develop the Advanced Time Series Compressor (ATSC) — an open source innovation that fundamentally reimagines high-volume time-series data compression.
ATSC implements a sophisticated lossy compression approach. Rather than storing complete data sets, it generates mathematical functions that closely approximate the original data patterns, storing only the essential parameters of these functions. This approach is paired with granular configurability — users can precisely tune their desired level of accuracy, balancing storage efficiency with data fidelity based on their specific use cases.
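To make the function-fitting idea concrete, here is a toy sketch (not ATSC's actual algorithm) that compresses a series by fitting a low-degree polynomial to each fixed-size segment and storing only the coefficients; the accuracy/size trade-off is tuned via the segment length and polynomial degree.

```python
import numpy as np

def compress(series, segment_len=64, degree=3):
    """Fit a polynomial to each segment; keep only its coefficients."""
    coeffs = []
    for start in range(0, len(series), segment_len):
        seg = series[start:start + segment_len]
        x = np.arange(len(seg))
        coeffs.append(np.polyfit(x, seg, degree))  # degree+1 floats per segment
    return coeffs

def decompress(coeffs, segment_len=64, total_len=None):
    """Rebuild an approximation of the original series from the coefficients."""
    x = np.arange(segment_len)
    series = np.concatenate([np.polyval(c, x) for c in coeffs])
    return series[:total_len] if total_len else series

# 1024 noisy samples -> 16 segments x 4 coefficients = 64 stored floats.
t = np.linspace(0, 8 * np.pi, 1024)
signal = np.sin(t) + 0.05 * np.random.randn(t.size)
params = compress(signal)
approx = decompress(params, total_len=signal.size)
print("max reconstruction error:", np.max(np.abs(signal - approx)))
```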
- What Do You Lose When You Abandon the Cloud?
High-profile moves from 37signals (the company behind Basecamp and HEY) and GEICO have sparked a renewed interest in cloud repatriation.
One sometimes overlooked advantage of moving to the cloud is that it allows you to pay for resources when they are needed, for example, as new customers come online. Spending moves from upfront CAPEX (buying new machines in anticipation of success) to OPEX (paying for additional servers on demand).
Another thing to weigh up is the pace of innovation — both from the cloud provider and from the consumer.
The Zynga example [of moving from the cloud to on-prem, then back to the cloud] highlights several other trade-offs. One to consider is that if you are running your own data centers, you need to be able to hire the right people and retain them.
There is another set of trade-offs around security. Keeping servers up to date and guarding against intrusions is time-consuming work that big cloud providers are very experienced in.
- Why All the Major Cloud Platforms Are the Same
Each provider brought unique strengths and strategic priorities to the table, which created differentiation at first, but over time they converged on a consistent baseline of functionality.
- Indexing code at scale with Glean
How is Glean different?
- Glean doesn’t decide for you what data you can store.
- Glean’s query language is very general.