Discussion
Nobody ever got fired for using a struct
SoftTalker: > But SQL schemas often look like this. Columns are nullable by default, and wide tables are common.

Hard disagree. That database table was a waving red flag. I don't know enough/any Rust so I don't really understand the rest of the article, but I have never in my life worked with a database table that had 700 columns. Or even 100.
gz09: Hi, I'm the author of the article; someone just made me aware that it landed here.

As to your hard disagree, I guess it depends... While this particular user is on the higher end (in terms of columns), they're not our only user with very wide tables. We see tables with 100+ columns on a fairly regular basis, especially when dealing with larger enterprises.
arcrwlock: Why not use a struct of arrays?

https://en.wikipedia.org/wiki/Data-oriented_design
holden_nelson: https://jimmyhmiller.com/ugliest-beautiful-codebase
unclad5968: It might not be common in typical software shops. I work in manufacturing and our database has multiple tables with hundreds of columns.
ambicapter: What's in them?
Mikhail_Edoshin: I saw tables with more than a thousand columns. It was a law firm's home-grown FileMaker tool. I didn't inspect it too closely, so I don't know what was inside.

I remember a phrase from one of C. J. Date's books: every record is a logical statement. It really stood out for me and I keep returning to it. Such an understanding implies a rather small number of fields, or the logical complexity will go through the roof.
mustache_kimono: > Why not use a struct of arrays?

I would assume because then the shape of the data would be too different? SoA is super effective when it suits the shape of the data. Here, the difference would be the difference between an OLTP and an OLAP DB, and you wouldn't use an OLAP DB for an OLTP workload.
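For readers unfamiliar with the AoS/SoA distinction being debated here, a minimal Rust sketch (type and field names are hypothetical, chosen only to illustrate the two layouts):

```rust
// Array-of-structs (AoS): row-oriented, natural for OLTP-style per-record access.
struct RowAoS {
    id: u64,
    value: f64,
}

// Struct-of-arrays (SoA): column-oriented, natural for OLAP-style scans.
struct TableSoA {
    ids: Vec<u64>,
    values: Vec<f64>,
}

fn sum_values_aos(rows: &[RowAoS]) -> f64 {
    // Touches every full row even though only `value` is needed.
    rows.iter().map(|r| r.value).sum()
}

fn sum_values_soa(table: &TableSoA) -> f64 {
    // Scans one dense column; cache-friendly for analytical aggregates.
    table.values.iter().sum()
}

fn main() {
    let rows = vec![RowAoS { id: 1, value: 1.5 }, RowAoS { id: 2, value: 2.5 }];
    let table = TableSoA { ids: vec![1, 2], values: vec![1.5, 2.5] };
    assert_eq!(sum_values_aos(&rows), sum_values_soa(&table));
}
```

Both compute the same result; the layouts differ only in which access pattern they make cheap, which is the OLTP-vs-OLAP tradeoff the comment points at.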
pizza-wizard: I’m working on migrating an IBM Maximo database from the late 90s to a SQL Server deployment on my current project. Also charged with updating the schema to a more maintainable and extensible design. Manufacturing and refurbishing domain - 200+ column tables are the norm. Very demoralizing.
jayanmn: Columns named Property1 through Property20 (or more) are one example. There are better ways to do it, but I have seen columns for storing ‘anything’.
bobson381: This is like the functional ugly tier of buildings from "How Buildings Learn". Excellent stuff.
bananamogul: That statement jumped out at me as well. I've worked as a DBA on tons of databases backing a wide variety of ERPs, web apps, analytics, data warehouses...700 columns?!? No.
roblh: I kinda love this. That sounds like an incredibly entertaining place to work for between 1 and 2 years in your late 20s and not a second longer.
astrostl: I have mixed feelings about it, but I'm going to fire somebody tomorrow for using a struct just to prove a point to the author.
unclad5968: Data from measurement tools. Everything about the tool configuration, time of measurement, operator ID, usually a bunch of electrical data (we make laser diodes) like current, potential, power, and a bunch of emission related data.
SigmundA: Looks like they just recreated a tuple layout in Rust, null bitmap and all; next up would be storing them in pages and memory-mapping the pages.

https://www.postgresql.org/docs/current/storage-page-layout....
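The "null bitmap" idea this comment refers to can be sketched compactly. This is a toy illustration of the technique, not the article's actual code or PostgreSQL's implementation: one bit per column marks NULL, and only non-null values are stored.

```rust
// Toy sparse row: a null bitmap plus a dense array of the non-null values.
// Roughly the idea behind PostgreSQL's per-tuple null bitmap (illustrative only).
struct SparseRow {
    null_bitmap: Vec<u8>, // bit i set => column i is NULL
    values: Vec<i64>,     // non-null values, in column order
}

impl SparseRow {
    fn from_columns(cols: &[Option<i64>]) -> Self {
        let mut null_bitmap = vec![0u8; (cols.len() + 7) / 8];
        let mut values = Vec::new();
        for (i, c) in cols.iter().enumerate() {
            match c {
                None => null_bitmap[i / 8] |= 1 << (i % 8),
                Some(v) => values.push(*v),
            }
        }
        SparseRow { null_bitmap, values }
    }

    fn get(&self, i: usize) -> Option<i64> {
        if self.null_bitmap[i / 8] & (1 << (i % 8)) != 0 {
            return None;
        }
        // Rank: count non-null columns before i to locate the stored value.
        let rank = (0..i)
            .filter(|&j| self.null_bitmap[j / 8] & (1 << (j % 8)) == 0)
            .count();
        Some(self.values[rank])
    }
}

fn main() {
    let row = SparseRow::from_columns(&[Some(10), None, Some(30)]);
    assert_eq!(row.get(0), Some(10));
    assert_eq!(row.get(1), None);
    assert_eq!(row.get(2), Some(30));
}
```

For a 700-column table where most columns are NULL, this layout pays ~88 bytes of bitmap per row instead of 700 slots, which is why wide, mostly-null tables push implementations toward it.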
dyauspitr: No one has written a struct in 10 years.
vharuck: https://apps.naaccr.org/data-dictionary/data-dictionary/vers...

771 columns (and I've read the definitions for them all, plus about 50 more that have been retired). In the database, these are split across at least 3 tables (registry, patient, tumor). But when working with the records, it's common to use one joined table. Luckily, even that usually fits in RAM.
wombatpm: Not everyone understands normal form, much less third normal form. I’ve seen people do worse with Excel files, where they ran out of columns and had to link across spreadsheets.
randallsquared: I have seen tables (SQL, and parquet too) with high hundreds of optional columns, but in those cases it was always understood to be a terrible hack.
duc_minh: > Sometimes the best optimization is not a clever algorithm. Sometimes it is just changing the shape of the data.

This is basically Rob Pike's Rule 5: If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. (https://users.ece.utexas.edu/~adnan/pike.html)
gz09: Point them to us https://github.com/feldera/feldera -- we are hiring ;)
gz09: Absolutely, it's a very common technique :)

I wasn't sure about writing the article in the first place because of that, but I figured it might be interesting anyway, because I was quite happy with how simple the optimization turned out to be once it was all done (when I started the task I wasn't sure whether it would be hard, given how our code is structured, the libraries we use, etc.). I originally posted this in the Rust community, and it seems people enjoyed the post.
sublinear: Can you clarify which knowledge domains those enterprises fall under, with examples of what problems they were trying to solve?

If it's not obvious, I agree with the hard disagree.
woah: No idea what these guys do exactly, but their tagline says "Feldera's award-winning incremental compute engine runs SQL pipelines of any complexity".

So it sounds like helping customers with databases full of red flags is their bread and butter.
gz09: > it sounds like helping customers with databases full of red flags is their bread and butter

Yes, that captures it well. Feldera is an incremental query engine. Loosely speaking, it computes answers to any of your SQL queries by doing work proportional to the incoming changes to your data (rather than to the entire state of your database tables).

If you have queries that take hours to compute in a traditional engine like Spark/PostgreSQL/Snowflake (because of their complexity, or the data size) and you want to always have the most up-to-date answer to your queries, Feldera will give you that answer 'instantly' whenever your data changes (after you've back-filled your existing dataset into it).

There is some more information about how it works under the hood here: https://docs.feldera.com/literature/papers
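The "work proportional to the changes" idea can be shown with a toy incremental GROUP BY SUM. This is an illustrative sketch only, not Feldera's API; its engine generalizes the same delta-maintenance idea to arbitrary SQL queries:

```rust
use std::collections::HashMap;

// Toy incremental aggregate: instead of rescanning the whole table on
// every update, apply each change to the maintained result.
// Inserts arrive as positive deltas, deletes as negative deltas.
struct IncrementalSum {
    totals: HashMap<String, i64>,
}

impl IncrementalSum {
    fn new() -> Self {
        IncrementalSum { totals: HashMap::new() }
    }

    // O(1) per change, independent of how many rows already exist.
    fn apply(&mut self, key: &str, delta: i64) {
        *self.totals.entry(key.to_string()).or_insert(0) += delta;
    }
}

fn main() {
    let mut view = IncrementalSum::new();
    view.apply("eu", 100); // insert a row with amount 100
    view.apply("eu", 25);  // insert another row
    view.apply("eu", -100); // delete the first row
    assert_eq!(view.totals["eu"], 25);
}
```

A batch engine would recompute the sum over all surviving rows on each query; the incremental view only ever touches the rows that changed.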
jimbokun: They’re pretty popular in Go?
jeswin: I wouldn't give too much credit to rules like this. Data structures are often created with an approach in mind; you can't design a data structure without knowing how you will use it.

If anything it's the other way round, unless you're talking about business domain modeling (where data-structures-first is a valid approach).
sublinear: If you don't know enough to design a data structure, requirements are missing and someone talking to the client is dropping the ball big time.
amluto: There are many systems that take a native data structure in your favorite language and, using some sort of reflection, make an on-disk structure that resembles it. Python pickles and Java’s serialization system are infamous examples, and rkyv is a less alarming one.

I am quite strongly of the opinion that one should essentially never use these for anything that needs to work well at any scale. If you need an industrial-strength on-disk format, start with a tool for defining on-disk formats, and map back to your language. This gives you far better safety, portability across languages, and often performance as well.

Depending on your needs, the right tool might be Parquet or Arrow or protobuf or Cap’n Proto or even JSON or XML or ASN.1. It’s probably not C structs or pickles or some other language’s idea of pickles, or even a really cool library that makes Rust do this.

(OMG, I just discovered rkyv_dyn. Boggle. Did someone really attempt to reproduce the security catastrophe that is Java deserialization in Rust? Hint: Java is also memory-safe, and that has not saved users of Java deserialization from all the extremely high severity security holes that have shown up over the years. You can shoot yourself in the foot just fine when you point a cannon at your foot, even if the cannon has no undefined behavior.)
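The "define the on-disk format explicitly" advice can be made concrete with a hand-rolled sketch. The record layout and field names below are hypothetical; schema-first tools like protobuf or Cap'n Proto generate this kind of code for you, but the point is that the byte layout is specified independently of any language's in-memory struct layout:

```rust
// An explicitly defined 6-byte record: u32 id then i16 score,
// both little-endian. The wire format is the spec; Rust's struct
// layout and padding never leak into it.
fn encode(id: u32, score: i16) -> Vec<u8> {
    let mut buf = Vec::with_capacity(6);
    buf.extend_from_slice(&id.to_le_bytes());    // bytes 0..4
    buf.extend_from_slice(&score.to_le_bytes()); // bytes 4..6
    buf
}

fn decode(buf: &[u8]) -> Option<(u32, i16)> {
    // Validate the input instead of reinterpreting bytes blindly --
    // the safety property reflection-based formats tend to give up.
    if buf.len() != 6 {
        return None;
    }
    let id = u32::from_le_bytes(buf[0..4].try_into().ok()?);
    let score = i16::from_le_bytes(buf[4..6].try_into().ok()?);
    Some((id, score))
}

fn main() {
    let bytes = encode(42, -7);
    assert_eq!(decode(&bytes), Some((42, -7)));
    assert_eq!(decode(&[0u8; 3]), None); // malformed input is rejected, not trusted
}
```

Because the format is a document rather than a language artifact, a Go or Python reader can implement `decode` against the same six-byte spec, which is the portability argument the comment is making.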
tdeck: If you enjoyed this, you'd probably enjoy thedailywtf.com, which is full of stories like that.
kstrauser: And pervasive in Rust.
generativenoise: That is definitely waving the red flag of poor data design. It's the "all I have is a single-level hash table and no real arrays" approach to records, so you end up with pseudo-arrays like "Path Report Type 1", "Path Report Type 2", ...

To me this screams of being constrained to interop with spreadsheets, since that is where I have seen similar things.

Do you think working with the wide single-level records is better/easier than a more decomposed design?

Also, out of curiosity, what do they do when there are more than 5 Path Reports? Duplicate the record, or does it just not happen?
linolevan: This is awesome. Got completely lost reading this and was struggling to figure out where I got this link from. Amazing story.
orthoxerox: It's OLAP; it's very common for analytical tables to be denormalized. As an example, each UserAction row can include every field from Device and User to maximize the speed at which fraud detection works. You might even want to store multiple Devices in a single row: the current one, plus common devices 1, 2, and 3.
jeswin: Where did I say any of that?

I'm saying that if you care about performance, data structures should be designed with approach-specific tradeoffs in mind. And as I said above, in typical business apps it's OK to start with data structures, because (a) performance is usually not a problem and (b) staying close to the domain is cleaner.
everyone: Just 'cause structs and classes work differently, and classes are much more common. I tend to make everything a class unless there is a really good reason to make it a struct.
gz09: > Depending on your needs, the right tool might be Parquet or Arrow or protobuf or Cap’n Proto

I think Parquet and Arrow are great formats, but ultimately they have to solve a problem similar to the one rkyv solves: for any given type they support, what does the bit pattern look like in serialized and in deserialized form (and how do you convert between the two)?

However, it is useful to point out that, on top of that, Parquet/Arrow solve many more problems needed to store data 'at scale' than rkyv (which is just a serialization framework, after all): a well-defined data and file format, backward compatibility, bloom filters, run-length encoding, compression, indexes, interoperability between languages, etc.
saghm: I feel like I'm missing something, but the article started by talking about SQL tables, and then in-memory representations, and then on-disk representation, but...isn't storing it on a disk already what a SQL database is doing? It sounds like data is being read from a disk into memory in one format and then written back to a disk (maybe a different one?) in another format, and the second format was not as efficient as the first. I'm not sure I understand why a third format was even introduced in the first place.