What 6AM On A Sunday Looks Like
" It's an interface engine, and that's the data manipulation screen I'm showing off. Basically its job is to act as a conductor for large amounts of data flowing in thousands of different directions. It can intake, manipulate, and outtake about 3 million messages a day: anything that has a standard for interoperability (SNOMED, DICOM, HL7, EDI, EDIFACT, X12, etc.).
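A minimal sketch of the "conductor" idea described above: intake a raw message, identify which interoperability standard it uses, then transform and route it per destination. All function names and the dispatch logic here are illustrative assumptions, not the actual engine's API; the preamble checks reflect the published formats (HL7 v2 messages open with an `MSH|` segment, DICOM files carry `DICM` after a 128-byte preamble, X12 interchanges open with `ISA`).

```python
# Illustrative sketch of an interface engine's core loop (not the real product).

def identify_standard(raw: bytes) -> str:
    """Crude dispatch on well-known message preambles."""
    if raw.startswith(b"MSH|"):      # HL7 v2 messages begin with an MSH segment
        return "HL7v2"
    if raw[128:132] == b"DICM":      # DICOM: 'DICM' magic after a 128-byte preamble
        return "DICOM"
    if raw.startswith(b"ISA"):       # X12 interchanges begin with an ISA segment
        return "X12"
    return "UNKNOWN"

def route(raw: bytes, transforms: dict, destinations: dict) -> list:
    """Return (destination, transformed_message) pairs for one inbound message."""
    kind = identify_standard(raw)
    return [(dest, transforms[kind](raw)) for dest in destinations.get(kind, [])]
```

In a real engine the transform table would be per-interface configuration rather than plain Python callables, but the intake/identify/transform/route shape is the same.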
so.. virtual routing
I'm familiar with bullshit binary protocols since I had to spend quite a bit of time working with FIX.
I wouldn't wish them on my worst enemy. I also know all about how standard their standards are, with each actor subtly fucking with the standard because they didn't know any better or because they think no one will ever notice. They're a relic from a bygone era when people's time was less expensive than data storage or transmission.

Parsing messages into structured data is a solved problem, with XML, JSON, and YAML immediately coming to mind in order of descending verbosity. Hell, I'm pretty sure there's an XML parser library for my toaster. This is where ad-hoc binary protocols instead of an actual standard really bite. Once parsed, outputting any arbitrary message via transforms on a data structure is pretty simple, and since the work is embarrassingly parallel, transforming and sending those messages is also pretty simple and scales linearly with the number of machines you throw at it. 3M messages/day is about 34 messages/second, or 29ms per "parse, transform, and send". I don't know what hardware you're using, but that doesn't strike me as blazingly fast.

I don't mean to demean, if it's coming off that way. Just looking at it makes me shudder, and it's why I ran far, far away from legacy binary protocols at the first opportunity. I'm also keenly aware of how much money there is in legacy medical systems, but it's for someone with much more... intestinal fortitude than I possess.
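The throughput arithmetic in that post checks out; a quick back-of-the-envelope script (the worker count at the end is a made-up example to illustrate the linear-scaling claim):

```python
# Back-of-the-envelope check of the 3M messages/day figures.
msgs_per_day = 3_000_000
seconds_per_day = 24 * 60 * 60            # 86,400

msgs_per_second = msgs_per_day / seconds_per_day
budget_ms = 1000 / msgs_per_second        # per-message time budget on one worker

print(f"{msgs_per_second:.1f} msg/s")     # ~34.7 msg/s
print(f"{budget_ms:.1f} ms per message")  # ~28.8 ms per parse/transform/send

# Messages are independent, so the work is embarrassingly parallel:
# N workers multiply the per-message budget (or the throughput) roughly linearly.
workers = 8                               # illustrative worker count
print(f"{budget_ms * workers:.0f} ms budget per message with {workers} workers")
```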
I agree, Pneuma.

But sometimes people just have a habit of sticking to the old ways, and you can bet there's a lot of money involved in this decision: both in existing contracts and in the approximate (huge) cost of moving to a more modern, standardized system. I also think one of the key reasons is security. You don't want a message containing sensitive medical information to be easily parseable by whoever-might-want-to-intercept-it.

Last edited by johnKeys#6083 on Oct 7, 2013, 2:51:42 AM
" That's just security through obscurity, and even then it's not obscure, because the binary protocol is publicly available. It's entirely to do with legacy systems and the fear of change. Big gears that are already in motion and all that crap. I'm extremely familiar with B2B, and it bothers me severely knowing that they're throwing away money. Optimizing this kind of multi-decade-old buffoonery is technically my job.

Last edited by pneuma#0134 on Oct 7, 2013, 3:13:18 AM
" I'm not particularly familiar with EDI, but FIX looks like a precursor to it - EDI is what most financial institutions now use, AFAIK.

" I will agree with you here, but I usually hold the other party responsible for making sure that what I'm trying to send them, or what I'm receiving, is as close to the standard as possible. Most of the time it's an education issue with the other party.

" There is an XML version of HL7, HL7 v3.x - and I absolutely hate it. What I can put into a length-encoded text message using the HL7 v2.x standard in about 50 lines takes like 600-700 lines of XML (not sure if you can obfuscate XML, never tried, but then again it wouldn't be readable to begin with). When you're having to view these messages to figure out where someone fucked up, or what an issue is, it's just not worth my time. Which is most likely the reason the healthcare industry pushed HL7 v3.x aside and continues to use HL7 v2.x - XML messages are just way too big to be convenient to read manually. Having a message confined to a space that fits my screen is much more convenient, and I'd rather have a bitch of a time coding the manipulation once than have to parse through lines and lines of meaningless XML tags to find the data I need - and lots of times the whole message needs to be viewable at once to determine the problem.

" I'm not actually sure what the max capabilities of this engine are. About 3 million messages per day is the highest I've ever seen cross through it, and that was taking about 40 hospital networks' data into one engine. These are natively built for RHEL Linux servers.

" I find it fun, but everyone has their thing - legacy systems are crazy entertaining to mess around with for me. The previous company I worked for still used IBM iSeries servers/mainframes that I got to play around with :)

Last edited by Elynole#2906 on Oct 7, 2013, 4:44:38 AM
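The compactness argument for HL7 v2.x above is easy to see in code: one segment per line, fields split on `|`, components on `^`. The sample PID segment below is fabricated test data, not from any real system, though the field positions match the published v2.x layout (PID-3 patient identifier, PID-5 patient name).

```python
# Fabricated HL7 v2.x PID segment, split by delimiter.
segment = "PID|1||12345^^^MRN||DOE^JOHN^Q||19700101|M"

fields = segment.split("|")
seg_type = fields[0]                      # 'PID' - patient identification segment
patient_id = fields[3].split("^")[0]      # PID-3, first component: '12345'
family, given = fields[5].split("^")[:2]  # PID-5: 'DOE', 'JOHN'

print(seg_type, patient_id, family, given)  # PID 12345 DOE JOHN
```

The same data in HL7 v3.x would be a nested XML document many times this size, which is the readability complaint Elynole is making.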