Great talk by Hannes Mühleisen of #DuckDB about tables being a fundamental technology of civilization, and about not dismissing databases, SQL & ACID just because some implementations are getting long in the tooth.
DuckDB sounds awesome and I know @bert_hubert is a big fan.
Drop #669 (2025-06-23): Monday Morning (Barely) Grab Bag
Rube Goldberg X-traction Pipeline; fplot; Color Everything in CSS
Something for (hopefully) everyone as we start off this brutally hot (in many parts of the northern hemisphere) terminal week of June.
Stay safe out there.
TL;DR
(This is an LLM/GPT-generated summary of today’s Drop using Ollama + Qwen 3 and a custom prompt.)
{fplot}
R package that automates the creation of distribution plots by detecting data types and selecting appropriate visualizations, with options for global relabeling of variables (https://lrberge.github.io/fplot/)
Rube Goldberg X-traction Pipeline
I don’t see many mentions of Rube Goldberg machines in pop-culture settings anymore, which is a shame, since I used to enjoy poring over them in my younger days. Perhaps the reason for the lack of mentions is that many data pipelines have much in common with those complex, over-“engineered” contraptions.
Case in point for a recent “need” of mine: I wanted a way to store posts from users on X into a DuckDB database, for archival and research purposes. I already use XCancel’s ability to generate an RSS feed for an account/search, which I yank into Inoreader for the archival part (the section header shows the XCancel-generated RSS feed for the White House’s other, even more MAGA, propaganda account).
Inoreader’s API is…not great. It can most certainly be machinated (I have an R package with the function I need in it), but I really wanted a solution that let me just use DuckDB for all the work.
Then, I remembered: if you put feeds into Inoreader folders, you can turn that folder into a JSON feed that gets updated every ~30 minutes or so. This one:
is for a series of feeds related to what’s going on in the Middle East right now.
With that JSON URL in hand, it’s as basic as:
#!/usr/bin/env bash

# for cache busting
epoch=$(date +%s)

duckdb articles.ddb <<EOQ
LOAD json;
INSTALL shellfs FROM community;
LOAD shellfs;

CREATE TABLE IF NOT EXISTS broadcast_feed_items (
  url VARCHAR PRIMARY KEY,
  title VARCHAR,
  content_html VARCHAR,
  date_published VARCHAR,
  tags VARCHAR[],
  authors JSON
);

-- this is where the update magic happens
INSERT OR IGNORE INTO broadcast_feed_items
FROM read_json('curl -s https://www.inoreader.com/stream/user/##########/tag/broadcast/view/json?since=${epoch} | jq .items[] |')
SELECT url, title, content_html, date_published, tags, authors;

-- Thinned out JSON content for viewing app
COPY (
  FROM broadcast_feed_items
  SELECT
    content_html,
    -- "title" is useless for the most part since this is an X post
    date_published AS "timestamp",
    regexp_replace(authors.name, '"', '', 'g') AS handle
) TO 'posts.json' (FORMAT JSON, ARRAY);
EOQ
There are other ways to unnest the data than using jq and the shellfs DuckDB extension, but the more RG the better (for this post)!
So the final path is:
X -> XCancel -> XCancel RSS -> Inoreader -> Inoreader JSON -> jq -> DuckDB
with virtually no code (save for the snippet, above).
I’ve got this running as a systemd timer/service pair that fires every 30 minutes.
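For reference, here is a minimal sketch of such a timer/service pair (the unit names and script path are hypothetical; the actual units aren't shown in this post):

# /etc/systemd/system/x-archive.service  (hypothetical name and paths)
[Unit]
Description=Pull the Inoreader JSON feed into DuckDB

[Service]
Type=oneshot
ExecStart=/usr/local/bin/x-archive.sh

# /etc/systemd/system/x-archive.timer
[Unit]
Description=Run x-archive every 30 minutes

[Timer]
OnCalendar=*:00/30
Persistent=true

[Install]
WantedBy=timers.target

Enable it with systemctl enable --now x-archive.timer.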
Later this week (when I’m done hand-coding it—yes, sans-Claude), I’ll have a Lit-based vanilla HTML/CSS/JS viewer app in one of the Drops.
fplot
(This is an #RStats section, so def move along if that is not your cuppa.)
My daily git-stalking led me to this gem of an R package.
{fplot} (GH) is designed to automate and simplify the visualization of data distributions (something I have to do every. single. day.). Its core mission is to let folks quickly generate meaningful and aesthetically pleasing distribution plots, regardless of the underlying data type (continuous, categorical, or skewed), by making spiffy choices about the appropriate graphical representation for each variable.
Functions in the package detect the nature of your data (e.g., categorical vs. continuous, skewed or not) and automatically select the most suitable plot type. For example, it will not use the same visualization for a categorical variable as it would for a continuous one, and it adapts further if the data is heavily skewed.
Ergonomics are pretty dope, since you only need a single line of code to generate a plot, with the package handling the details of layout and type selection. This is particularly useful for exploratory data analysis or for folks who want quick, visually appealing graphics without extensive customization.
Tools are provided to globally relabel variable names for all plots. This is managed via the setFplot_dict() function, which lets us map cryptic/gosh-awful or technical variable names to more readable labels that will appear in all subsequent plots.
Example usage:
setFplot_dict(c(
  Origin = "Exporting Country",
  Destination = "Importing Country",
  Euros = "Exports Value in €",
  jnl_top_25p = "Pub. in Top 25% journal",
  jnl_top_5p = "Publications in Top 5% journal",
  journal = "Journal",
  institution = "U.S. Institution",
  Petal.Length = "Petal Length"
))
The typical workflow with fplot is straightforward: optionally relabel your variables with setFplot_dict(), then call the appropriate fplot function on your variable(s) of interest. The same function call can yield different types of plots depending on the data provided, streamlining the process of distributional analysis and visualization.
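A minimal sketch of that workflow (assuming plot_distr() as the entry point, per the package docs, and reusing the iris label from the dictionary example above):

library(fplot)

# set global, human-friendly labels once (as in the dictionary call above)
setFplot_dict(c(Petal.Length = "Petal Length"))

# one call per variable; the package inspects the data and
# picks a sensible representation for each type
plot_distr(iris$Petal.Length)       # continuous variable
plot_distr(~ Species, data = iris)  # categorical variable, formula interface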
A gallery of examples and a more detailed walk-through are available on the package’s website.
Color Everything in CSS
The CSS-Tricks article “Color Everything in CSS” offers a comprehensive, up-to-date exploration of how color works in CSS, moving beyond just the basics of color and background-color to cover the deeper technical landscape of color on the web. The article introduces essential concepts like color spaces, color models, and color gamuts, which are foundational for understanding how colors are represented, manipulated, and rendered in browsers today.
We’ve covered many of these individual topics before, but this is a well-crafted, all-in-one treatment that does such a good job, I do not wish to steal any of its thunder. Head on over to level up your CSS skills.
FIN
Remember, you can follow and interact with the full text of The Daily Drop’s free posts on:
@dailydrop.hrbrmstr.dev@dailydrop.hrbrmstr.dev
https://bsky.app/profile/dailydrop.hrbrmstr.dev.web.brid.gy
oops, #til how to use #duckdb to query a CSV, generate date ranges, use window functions to backfill data, and pivot functions to make data that you can easily graph in a spreadsheet (a rough sketch of the SQL shape follows after the list below).
based upon:
- average solar radiation distribution over the year for my area
- My actual kwh production and usage for the last month (which #homeassistant gives as data change events, not hourly or daily reporting)
- The KWHs I've spent on AC that I expect to increase over the summer
I'm operating at 85% capacity
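A minimal sketch of what that SQL could look like in DuckDB (the file, table, and column names here are hypothetical; the post doesn't include its actual queries):

-- load the raw export (hypothetical ts/kwh columns)
CREATE TABLE events AS
SELECT * FROM read_csv_auto('energy_events.csv');

-- generate one row per day for the year
CREATE TABLE days AS
SELECT CAST(unnest(generate_series(DATE '2025-01-01', DATE '2025-12-31', INTERVAL 1 DAY)) AS DATE) AS day;

-- backfill: carry the last known reading forward over days with no events
CREATE TABLE daily AS
SELECT day, last_value(kwh IGNORE NULLS) OVER (ORDER BY day) AS kwh
FROM days
LEFT JOIN (
  SELECT CAST(ts AS DATE) AS day, max(kwh) AS kwh
  FROM events GROUP BY 1
) AS readings USING (day);

-- pivot months into columns for spreadsheet-friendly graphing
CREATE TABLE monthly AS
SELECT strftime(day, '%Y-%m') AS month, kwh FROM daily;
PIVOT monthly ON month USING sum(kwh);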
Since we can't use the cloud to automate our #EHR analysis projects, we tried #nextflow (traditionally used in bioinformatics, https://www.nextflow.io/), and it worked like a charm coordinating #duckdb, #R, #python, and within-node tasks.
Next in line is (R)?ex https://www.rexify.org/
#duckdb now has the ability to run commands from serialized formats like #json or #CSV. Already have a use for this.
Latest release is simply packed with improvements.
https://duckdb.org/2025/05/21/announcing-duckdb-130.html
Easily obtain OSM and OMF data: #Python and CLI tools #QuackOSM and #OvertureMaestro offer easier access to data from #OpenStreetMap (#OSM) and the Overture Maps Foundation (#OMF) through #PyArrow, #GeoParquet, or #DuckDB. These tools can simplify large-scale geospatial data...
https://spatialists.ch/posts/2025/05-23-easily-obtain-osm-and-omf-data/ #GIS #GISchat #geospatial #SwissGIS
New #DuckDB release 1.3.0
Looks like a solid release, including better compression of strings. #SQL #Analytics
https://duckdb.org/2025/05/21/announcing-duckdb-130.html
https://github.com/daniel-j-h/zindex: this little cloud-native spatial index made it into #WeeklyOSM.
With DuckDB-WASM, spatial queries against a single static file are a thing!! What!!
Example:
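A hypothetical sketch of such a query against a static (Geo)Parquet file (the URL, schema, and coordinates are made up):

INSTALL spatial; LOAD spatial;
INSTALL httpfs;  LOAD httpfs;

-- HTTP range requests fetch only the row groups the query needs
SELECT count(*) AS buildings_in_area
FROM read_parquet('https://example.com/buildings.parquet')
WHERE ST_Within(
  ST_GeomFromWKB(geometry),  -- assuming a WKB-encoded geometry column
  ST_GeomFromText('POLYGON((7.4 51.5, 7.5 51.5, 7.5 51.6, 7.4 51.6, 7.4 51.5))')
);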
Reading those comments on HN regarding #DuckDB and ease of use with geospatial data, I am reminded of a #QGIS feature not a lot of people seem to be aware of:
Virtual layers allow you to query any supported dataset in QGIS using Spatialite SQL syntax.
So if you are working with geospatial data, you may already have QGIS installed and don't even need DuckDB to run spatial SQL without further DB setup.
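As an illustration, a virtual layer query over two loaded layers might look like this (layer and column names are hypothetical):

-- Spatialite syntax; loaded QGIS layers are queried like tables
SELECT b.id, d.name AS district
FROM buildings AS b
JOIN districts AS d
  ON ST_Within(b.geometry, d.geometry)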
Instant SQL for results as you type in DuckDB UI
https://motherduck.com/blog/introducing-instant-sql/
#ycombinator #duckdb #snippets #duckdb_snippets #duckdb_snippets_com #Instant #SQL #here #Speedrun #ad_hoc #queries #you #type #MotherDuck #Blog
Abusing DuckDB-WASM by making SQL draw 3D graphics (Sort Of)
https://www.hey.earth/posts/duckdb-doom
#ycombinator #duckdb #sql #wasm #doom
howdy, #hachyderm!
over the last week or so, we've been preparing to move hachy's #DNS zones from #AWS route 53 to bunny DNS.
since this could be a pretty scary thing -- going from one geo-DNS provider to another -- we want to make sure *before* we move that records are resolving in a reasonable way across the globe.
to help us to do this, we've started a small, lightweight tool that we can deploy to a provider like bunny's magic containers to quickly get DNS resolution info from multiple geographic regions. we then write this data to a backend S3 bucket, at which point we can use a tool like #duckdb to analyze the results and find records we need to tweak to improve performance. all *before* we make the change.
then, after we've flipped the switch and while DNS is propagating, we can watch in real-time as different servers begin flipping over to the new provider.
we named the tool hachyboop and it's available publicly --> https://github.com/hachyderm/hachyboop
please keep in mind that it's early in the booper's life, and there's a lot we can do, including cleaning up my hacky code.
attached is an example of a quick run across 17 regions for a few minutes. the data is spread across multiple files but duckdb makes it quite easy for us to query everything like it's one table.
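A hypothetical sketch of that kind of multi-file query (the bucket layout and column names are made up; hachyboop's actual output schema may differ):

INSTALL httpfs; LOAD httpfs;
-- S3 credentials configured separately (e.g., via CREATE SECRET)

-- the glob makes many per-region files read like one table
SELECT region, count(*) AS observations
FROM 's3://hachyboop-results/*/*.parquet'
GROUP BY region
ORDER BY region;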
Thank you to the #Kone Foundation and the Maj and Tor Nessling Foundation for supporting this work. A quantitative work like this would not be possible without a robust suite of FOSS tools. My thanks to the maintainers of #QGIS, #pandas, #geopandas, #duckdb, #dask, #statsmodels, #jupyter and many more!
Wow.
#DuckDB just got a built-in #UI since 1.2.1.
In the screenshot, I am running it on my photovoltaics database.
Because I have all the interesting reports written as views, I now have a thing that I can just give out to non-programmers, who can do cool stuff with it.
Well done, @hannes and team at @motherduck <3
Update: Read their blog - https://duckdb.org/2025/03/12/duckdb-ui
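Per that blog post, the UI ships with the CLI; a minimal sketch (the database and view names are hypothetical):

# from a shell: open the UI against an existing database
duckdb -ui photovoltaics.duckdb

-- or, from inside a running duckdb session
CALL start_ui();

-- reports exposed as views are what non-programmers then click through
CREATE VIEW daily_yield AS
SELECT CAST(ts AS DATE) AS day, sum(kwh) AS kwh
FROM readings GROUP BY 1;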
Satellite Imagery You Can Play With
--
https://hackaday.com/2025/03/10/satellite-imagery-you-can-play-with/ <-- shared technical article
--
https://tech.marksblogg.com/satellogic-open-data-feed.html <-- shared ‘how to’ article
--
[this post should not be considered an endorsement of any product or service]
#GIS #spatial #mapping #Satellogic #EarthView #satellite #cubesat #NewSat #microsatellites #GDAL #python #DuckDB #H3 #JSON #Lindel #Parquet #remotesensing #earthobservation #Aleph #NORAD #location #howto #tutorial #code #coding #metadata #imagery