1.
Teleforking a Process onto a Different Computer
2.
Show HN: Gmail CLI Utils (bulk delete mail by query, get/create filters)
Comment by adamfeldman:
For "Declarative configuration for Gmail filters", see also https://github.com/mbrt/gmailctl
3.
The third wave of open source migration
Comment by jamesblonde:
https://twitter.com/wdaali999/status/1161973951565881345?lan...
Basically, anything not open source is not cool any more: SAS, MATLAB, SPSS. Kids are not learning these tools in school and don't want to use them. I see open source taking over data science by the time this recession is over: Jupyter, conda, scikit-learn, TensorFlow, PyTorch, RStudio, and even PySpark.
Comment by msoad:
When winter comes, those projects don't make sense anymore because cost-cutting measures are mandated. The same leader might even make the case for the open-source alternative.
I've seen this enough times to know it is a pattern in our industry.
Comment by pinky07:
Last month, we lost a big project against SAP (5M€ budget): the company chose SAP because their holding was willing to pay for the project. Last week, the same prospect came back to Odoo: since the holding could no longer afford such a project, the company had to pay from its own budget, so they chose Odoo (<1M€ budget).
I believe the next wave is a replacement of proprietary expensive business applications: ERP, SAS, BI...
Comment by mooreds:
It is important to recognize the value of developer time too, though. There's a cost in dev time for setting up a "free" project.
That's why I think that any open-source project that gets too popular will have to have a cloud vendor strategy; otherwise they'll have done to them what AWS did to Elasticsearch.
I also thought it was interesting that the author mentioned support for the various application libraries. I know that there have been several "tip" type applications (gittip, gitcoin.co) that try to align incentives and allow open source developers to make a living.
Comment by prepend:
I think using these packages and projects requires more due diligence and planning from staff to pick and support them, but I think the current highly variable, project-by-project support works out well. And then for big stuff (Linux, Postgres, etc.) some commercial support is brought in.
I'd much rather see more support for companies donating developer hours to patches and features: some way to recognize in-kind and labor contributions and expand recognition for these kinds of contributions. I think this works better for software than trying to get every company to pay into some support fund. If you want to pay structured licenses for everyone, there's a model for that. Trying to shoehorn license fees on top of open source loses a lot of the efficiencies, I think.
Comment by c-smile:
Yet, "the rise of hosted cloud services like AWS, Google Cloud, and Microsoft Azure" is just "anti-pattern" for the subject of the article. Commercial companies that exploit (fuzzy term here but still) OS software.
Comment by mrfusion:
Comment by okram:
Tech is dead.
4.
A-Shell: Terminal for iOS
Comment by thecybernerd:
Comment by GekkePrutser:
What iOS really needs to make this useful though is a way to project to a proper screen and keyboard/mouse configuration. Like Samsung DeX. Kinda hoping this will happen as they are making the iPad Pros more like a computer.
Comment by alexhutcheson:
Is there any documentation on what shell syntax this supports? I assume it's not running a standard shell like Bash or zsh.
Edit: https://github.com/holzschu/ios_system/blob/master/README.md confirms it's not running sh, bash, or zsh, and has some additional details on the available commands. I still think it would be nice for this to be more explicit, but the information is out there.
Comment by jlgaddis:
That's extreme overkill for my needs -- I'd just like the ability to log in to a few hosts (preferably using public key authentication!) and run various commands just like I normally do in a terminal.
If anyone has any "favorites" they recommend, I'd be interested in hearing about them. I'd prefer something open-source (out of principle) but I'm certainly not opposed to paying a reasonable amount.
Comment by jedisct1:
A-Shell seems to be very limited, and additional packages cannot be installed. What are the use cases for which A-Shell would be a better fit than iSH?
Comment by thesuperbigfrog:
On Android, I love using Termux (https://termux.com/).
If I have a computer in my pocket, I should be able to use it as a computer, not merely a consumption device
Comment by 5-:
iSH uses a completely different approach -- it's a custom x86 + Linux emulator that runs a complete, unaltered Alpine Linux userland: https://ish.app/
Comment by airstrike:
Comment by RodgerTheGreat:
5.
'Expert Twitter' Only Goes So Far – Bring Back Blogs
Comment by nullc:
The audience of people who might read your blog but who aren't stuck on a scroll treadmill is too small to bother, especially with the death of many popular rss readers.
Comment by joelrunyon:
If you don't own your platform, you don't own your content.
Register your domain, install WordPress, start your own blog.
We actually created https://startablog.com to drive this point home, teach people how to do it and even have a team that will do it for you for free if you need the help (lots of people still find the WP install process intimidating).
Comment by andy_ppp:
Comment by notacoward:
Ironically, I think the reason is recognizable from epidemiology. The network of twitter followers is just denser than the network of bloggers ever was or likely ever will be. Even the very best blog posts still tended not to spread even an order of magnitude as well as a good tweet thread. As much as I hate the format, I don't think blogs can or will displace it.
Comment by smitty1e:
Go to Wordpress and get your dog-gone blog on.
Comment by askafriend:
I still like Twitter for getting a broad range of thoughts quickly. I view it as a very rough pulse of public consciousness. Not a research paper or word from God.
Comment by Icathian:
Frankly, that seems like a long way to go for what is effectively twitlonger. If you buy the premise that Twitter is effective at amplifying the right voices (very much still up for debate), and the problem is the interface for long-form content, then it seems to need a much simpler UX fix rather than trying to invent a separate-but-joined platform from whole cloth.
Comment by HugoDaniel:
Comment by pgt:
Comment by asdfadfaf:
Comment by some_furry:
Comment by rado:
Comment by DrNuke:
6.
Creating ad hoc microphone arrays from personal devices (2019)
Comment by crazygringo:
Capturing high-quality audio in a meeting room for videoconferencing is a notoriously complicated problem.
Microphones are crazy sensitive and pick up things like footsteps and conversations outside the door, shuffling feet and tapping on keyboards, and construction and HVAC noise like you wouldn't believe.
So filtering those things out, and then capturing the best quality audio from the current speaker, and trying to get everyone's voice at roughly the same volume whether they're sitting directly across from the microphone or are piping up from the corner of the room...
...and do this all while cancelling 100% of the echo that might be coming from two or three speakers at once...
...it's an insanely hard problem. Beamforming microphones absolutely help in a huge way: if you know the speaker's voice is coming from 45°, then any sound coming from any other angle can be removed, which is a really helpful piece of info.
Now, with a dedicated beamforming microphone array, the precise relative location and direction of each mic is known. The idea of creating one big beamforming mic for the room out of people's individual mics is... insanely hard, but super cool.
It's interesting to me that this article is about measuring the quality of voice transcription, rather than about the quality of audio in an actual meeting. But I suppose the voice transcription quality measurement is simply a proxy for the speaker audio quality generally, no?
This could actually be a huge step forward in not needing videoconferencing equipment in meeting rooms. So far, one of the biggest reasons has actually been dealing with echo and feedback -- when people are in the same call with multiple devices in the same room, it tends to end badly. But if the audio processing is designed for that... the results could actually be quite amazing.
And it's well-known that the "bowling alley" visual of meeting participants (camera at the end of a long conference table) isn't ideal. If each participant has their own laptop camera on themselves, it could be a vastly better experience for remote participants.
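For anyone curious what "beamforming" means mechanically, here is a minimal delay-and-sum sketch in Python (illustrative only: it assumes an idealized linear array with known mic positions, NumPy float arrays, and a far-field plane wave; every name and parameter is made up for the example):

    import numpy as np

    def delay_and_sum(signals, mic_positions, angle_deg, fs, c=343.0):
        """Steer a linear mic array toward angle_deg by delaying and summing.

        signals:       (n_mics, n_samples) array of synchronized recordings
        mic_positions: (n_mics,) mic x-coordinates in metres
        angle_deg:     direction of arrival to steer toward
        fs:            sample rate in Hz
        c:             speed of sound in m/s
        """
        angle = np.deg2rad(angle_deg)
        # Extra propagation delay at each mic for a plane wave from angle_deg,
        # expressed in samples (sign depends on the geometry convention).
        delays = mic_positions * np.sin(angle) / c * fs
        out = np.zeros(signals.shape[1])
        for sig, d in zip(signals, delays):
            n = len(sig)
            freqs = np.fft.rfftfreq(n)
            # Fractional-sample shift applied in the frequency domain so the
            # steered direction adds coherently across mics.
            shifted = np.fft.irfft(np.fft.rfft(sig) * np.exp(2j * np.pi * freqs * d), n)
            out += shifted
        return out / len(signals)

Sound arriving from the steered angle adds coherently while off-axis sound partially cancels, which is the intuition behind "any sound not coming from 45° can be removed".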
Comment by pjc50:
The specific improvement Microsoft are touting is blind beamforming, without knowing where the microphones are located relative to each other. Regular beamforming is already in use in some products.
Comment by peter_d_sherman:
"While the idea sounds simple, it requires overcoming many technical challenges to be effective. The audio quality of devices varies significantly. The speech signals captured by different microphones are not aligned with each other. The number of devices and their relative positions are unknown. For these reasons and others, consolidating the information streams from multiple independent devices in a coherent way is much more complicated than it may seem. In fact, although the concept of ad hoc microphone arrays dates back to the beginning of this century, to our knowledge it has not been realized as a product or public prototype so far."
Thoughts:
There's something deep here, not with respect to microphones and speech transcription (although I wish Microsoft and whoever else attempts to wrestle with those problems the greatest of success!)
There's a related deep problem in physics here.
If we consider signals that emanate from outer space, let's say they're from the big bang, or heck, let's just say they're from one of our past-the-edge-of-this-solar-system satellites that wants to communicate back to Earth.
Well, due to the incredible distances involved, the signal will get garbled in various ways...
So here's the $64,000 question:
When that signal from deep space gets garbled, isn't it possible that it turns into various other signals, at various different other frequencies and wavelengths?
In other words, space itself, over long distances, acts as a prism (not really, but it's an easy way to wrap your mind around the concept) for radio and other electromagnetic waves...
Now, if you want to reconstruct the original message at these long distances, you must be able to reconstruct garbled radio (and other EM) waves, which are moving at different frequencies, and may even arrive at the destination at different rates of speed with various time shifts...
Basically, you've got to take those pieces -- move them to the correct frequency, time correct them, speed them up or slow them down, sync them, and overlay them -- to reconstruct the original message...
That's the greater question in physics -- the ability to do all of that, with em signals from a long way off in space...
The article referenced -- is the microphone/audio/slow speed equivalent -- of that larger problem...
Comment by itchyjunk:
Think of all those shitty little video clips people take at a concert. Could all those be combined to make some high quality panoramic video? Probably a lot of other cool applications that I can't even comprehend for now. What a time to be alive.
Comment by Zenst:
It comes down to matching a single sound and working out the timing of that sound across the multiple sources. Then you also need to factor in the frequency response as well.
That last part would be important for handling things like the table the devices sit on picking up vibrations from the desk. Remember that phones don't have a rubber base to isolate them from the table, so any vibration of that surface would propagate into the device and microphone. Then there's the whole aspect of varying devices, and with that, varying microphone quality and device housings. So calibration at some level would be key for this to work. It's doable, though, and processing-wise you could even run a master device and handle the processing there, removing the server aspect, with some of the processing done on each local device and passed on to the main device for correlating. Certainly some phones have the power to handle this kind of work and replace the server. But that would be more work/effort and something we may well see later on. It also makes it harder to sell a bit of server processing software, though.
One test I'd like to see this system handle is how well it filters out those vibrations.
After all, you don't want to hear somebody writing or putting a cup or other object down whilst somebody else is talking.
I'd also wonder what kind of jitter tolerances they are working with across those devices, and how that scales: does jitter increase after so many devices?
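A minimal sketch of the timing-matching step in Python (assuming two already-resampled recordings of the same scene as NumPy arrays; all names here are illustrative):

    import numpy as np

    def estimate_lag(ref, other):
        """Samples by which the same sound appears later in `other` than in `ref`,
        estimated from the peak of the cross-correlation."""
        corr = np.correlate(other - other.mean(), ref - ref.mean(), mode="full")
        return int(corr.argmax()) - (len(ref) - 1)

    def align(ref, other):
        """Drop leading samples from whichever stream lags, then trim to a common length."""
        lag = estimate_lag(ref, other)
        if lag > 0:
            other = other[lag:]
        else:
            ref = ref[-lag:]
        n = min(len(ref), len(other))
        return ref[:n], other[:n]

Real ad hoc arrays also have to track clock drift (the streams slowly desynchronize) and equalize each device's frequency response, which is the calibration step mentioned above.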
Comment by geokon:
Comment by stuaxo:
The idea was that at a gig loads of people would record and you could reconstruct a much better recording.
Comment by stragies:
Comment by andrewfromx:
Comment by kohtatsu:
Edit: This would be cool if I trusted Microsoft to properly handle privacy.
7.
C program proofs with Frama-C and its weakest-precondition plugin [pdf]
Comment by ngneer:
Comment by fizixer:
How widely used is it? I've heard of TLA+ being very popular. Then there is Z notation, and about half a dozen or more others.
If there is no standard FSL, do I have to learn a new FSL every time I want to apply formal methods to a slightly different system I'm working with?
Formal methods is a hard enough area. Lack of standardization makes it harder.
Comment by therein:
Turns out it stands for Weakest Precondition.
8.
Ask HN: What scientific phenomenon do you wish someone would explain better?
Comment by pjungwir:
Comment by qubex:
To wit, the idea is that you cannot distinguish whether you are in an accelerated frame or in a gravitational field; alternatively stated, if you're floating around in an elevator you don't know whether you're freefalling to your doom or in deep sidereal space far from any gravitational source (though of course, since you're in an elevator car and apparently freefalling... I think we'd all agree on what's most likely, but I digress).
Anyway, what irks me is that this is most definitely not true at the "thought experiment" level of theoretical thinking: if you had two baseballs with you in that freefalling lift, you could suspend them in front of you. If you were in deep space, they'd stay equidistant; if you were freefalling down a shaft, you'd see them move closer because of tidal effects dictated by the fact that they're each falling towards the earth's centre of gravity, and therefore at (very slightly) different angles.
Of course, they'd be moving slightly toward each other in both cases (because they attract gravitationally), but the tidal effect is additional and present in only one scenario, allowing one to (theoretically) distinguish, apparently violating the bedrock Equivalence Principle.
I never see this point raised anywhere and I find it quite distressing, because I’m sure there’s a very simple explanation and that General Relativity is sound under such trivial constructions, but I haven’t been able to find a decent explanation.
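For what it's worth, the usual textbook resolution is that the equivalence principle is only claimed to hold locally, in the limit where tidal terms vanish. A rough estimate of their size for the two baseballs (a sketch, with d the horizontal separation, R the distance to Earth's centre, and g the surface gravity):

    a_{tidal} \approx g \, \frac{d}{R} \approx 9.8\,\mathrm{m/s^2} \times \frac{1\,\mathrm{m}}{6.4 \times 10^{6}\,\mathrm{m}} \approx 1.5 \times 10^{-6}\,\mathrm{m/s^2}

Each ball falls toward Earth's centre along directions differing by an angle of roughly d/R, so the converging acceleration scales with the size of the laboratory and goes to zero as the region (and time span) of the experiment shrinks; the principle survives as a statement about sufficiently small, freely falling laboratories.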
Comment by phkahler:
Comment by ramboldio:
Comment by Dutchie85:
The well-known example that if you travel into space you'd gain, let's say, 5 years while people on Earth gain 25 in the same time, or thereabouts.
I just don't get it, and I can't find any logical explanation.
For instance: two twins who come into being at exactly the same moment in the year 2000 and both die on their 75th birthday at the same time. One travels into space, the other stays on Earth. Earth-brother dies in Earth-year 2075, space-brother dies in Earth-year 3050 or so...
I know it's Einstein's point, but that just doesn't instantly make it correct to me.
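A worked version of those numbers (a sketch that idealizes away the acceleration and turn-around phases): special relativity relates the two elapsed times by the Lorentz factor,

    \Delta t_{Earth} = \gamma\,\Delta t_{ship}, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}

so at v \approx 0.98c, \gamma \approx 5, and 5 years of ship time correspond to about 25 years on Earth. The asymmetry (why it is the traveller, not the stay-at-home twin, who ages less) comes from the traveller changing inertial frames at the turn-around, which is the step most short explanations skip.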
Comment by daenz:
Comment by lpellis:
Comment by pjungwir:
Comment by anton_tarasenko:
The StackExchange sites have less coverage and answers tend to be more technical.
University websites return reliable answers, but they are often neither short nor accessible.
Comment by npr11:
Comment by crosser:
Comment by davidmanheim:
ZK proofs have a number of good explainers, mostly using graph colorings. Non-interactive versions, however, require quite a bit more than that explanation allows - and despite asking experts, I still haven't found a good, basic explanation.
Comment by mynegation:
How the immune system and medications work.
Why some plastics are recyclable and others are not.
Comment by pgt:
Comment by vvoyer:
Comment by qqqqquinnnnn:
Comment by VygmraMGVl:
Comment by plurinshael:
Comment by airstrike:
Comment by curiousgal:
9.
Psychological techniques to practice Stoicism
Comment by Svip:
If one should never worry about things that they cannot possibly control, even if they directly affect one's life, because we are just going to cease to exist at some point anyway, how would one know whether or not they could alter them, if they never began worrying? This very idea led several prominent Stoics to commit suicide: why not hasten one's eventual ceasing to be?
Perhaps if they had concerned themselves with things that on the surface seemed outside of their reach, they might have realised that some things are approachable, even if the solution is not obvious.
The idea that one should avoid worrying about things outside one's control is not a bad suggestion in general; it just should not be taken to an extreme. I mean, there is probably a reason why philosophers went back to Aristotle and Plato after those other four Schools saw prominence.
Jewish, Christian and Islamic philosophers weren't trying to make their religions compatible with Zeno's or Epicurus' teachings, but rather Plato's and later Aristotle's.
Comment by NegativeLatency:
IIRC: studies have not validated this
Comment by kashyapc:
From my experience of reading multiple translations of the Big Three, for someone new to Stoicism, I'd suggest not to start with the popular recommendation of Marcus Aurelius.
Start with Seneca's Letters, then Epictetus (an ex-slave, and a profound influence on Marcus Aurelius), and only then Marcus Aurelius, the Roman Emperor. (To quote the foremost Stoic scholar, A.A. Long: "[...] That an ex-slave actually shaped a Roman Emperor's deepest thoughts is one of the most remarkable testimonies to the power and applicability of Epictetus' words.")
The quality of the English translation matters a lot. Here are my recommendations:
• Seneca: Letters on Ethics — translation by Margaret Graver and A. A. Long. This is the most recent translation, reads extremely well, has outstanding notes, and is wonderfully typeset. It's done by the current foremost experts; can't get better than this. I've been reading this for four months. (If this is a tad pricey for you, there are also Oxford and Penguin editions of a selection of Seneca's letters.)
• Epictetus: Encheiridion, and Selections from Discourses, by A.A. Long. This is a short book; the value addition here is the great introduction, and the outstanding glossary. (NB: there is no escaping full Discourses of Epictetus—refer below.)
• Epictetus: Discourses, Fragments and Handbook — translation by Robin Hard, intro by Christopher Gill; Oxford University Press. Spend a good four months immersing yourself in it. Epictetus is full of heavy irony, dark humor, histrionic wit, and sarcasm. Absolutely my favourite.
• Epictetus: A Stoic and Socratic Guide, by A.A. Long. (Important note: to get maximum value out of this, you must have already read at least one translation of Epictetus' full Discourses!) This book orients the reader to Epictetus with extremely valuable context: how not to misinterpret his unqualified faith in "divine providence" (which can grate on our "modern ears"); the influence of Plato and the "Socratic Elenchus" (colloquially known as the "Socratic Method"); deep insights into Epictetus' own inimitable style; and a rich bibliography.
• Marcus Aurelius: Meditations. There are at least six translations. I'd suggest starting with the gentler translation by Gregory Hays. If you like it, then you can research other translations. (A. S. L. Farquharson spent a lifetime on his translation of the Meditations; it also has commentary. I sometimes consult this edition.)
• The Inner Citadel: The Meditations of Marcus Aurelius, by Pierre Hadot. This needs to be read only after you've read at least one translation of Marcus Aurelius. It is a fantastic dissection of Aurelius' work—Hadot studied him for 25 years. Besides fresh translations of the Meditations, it also contains an unparalleled summary of Epictetus, and many quotes of Seneca.
Comment by a-saleh:
Comment by troughway:
I want psychologists to write a solid book on this subject.
So far the majority of posts I have read on this subject have been by software developers, which strikes me as bizarre. You can leave your Zen and the Art of Motorcycle Maintenance out of this.
Software developers are not qualified to write about this. They are clueless dingbats and don't know it.
How about some proper sources on the subject?
Comment by exit:
Humans weren't designed; they were selected for through a happenstance process. Random mutation is critical to this process, and so any one of us can be deeply at odds with whatever the majority are geared towards.
Comment by westurner:
Meditations (Marcus Aurelius) https://en.wikipedia.org/wiki/Meditations
10.
Decrypt WhatsApp encrypted media files
Comment by natch:
> A recent high-profile forensic investigation reported that "due to end-to-end encryption employed by WhatsApp, it is virtually impossible to decrypt the contents of the downloader [.enc file]"
This quote clearly means it is virtually impossible without the key. OF COURSE if you have full access to the device as a logged in user, then you can get access to the key and decrypt things that cannot be decrypted by others who do not have the key. Nothing to see here.
At least, to the author's credit, the FAQ answers below clarify this, but only after the lead-in, which is all most people read, has already done the damage of dramatically planting the incorrect impression that someone has figured out how to break WhatsApp encryption.
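To illustrate how routine the "with the key" case is, here is a minimal sketch using the Python cryptography library. It assumes you have already recovered the per-file AES key and IV; third-party write-ups describe deriving them from the media key via HKDF, and AES-CBC with PKCS#7 padding is an assumption here, not something the article confirms:

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def decrypt_media(ciphertext: bytes, key: bytes, iv: bytes) -> bytes:
        # Decrypt an .enc blob, assuming AES-CBC with PKCS#7 padding.
        dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
        padded = dec.update(ciphertext) + dec.finalize()
        return padded[:-padded[-1]]  # strip the padding

The hard part is, as the parent says, obtaining the key, not running the cipher.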
Comment by sloshnmosh:
Comment by Funes-:
Comment by smashah:
Everyone needs to be incredibly careful about being phished for their WhatsApp Web logins.
Also, WhatsApp does not respect message integrity regardless of e2e encryption. They WILL mutate your message if required.
Comment by ignoramous:
And here's a desktop viewer to search through decrypted files: https://forum.xda-developers.com/showthread.php?t=1583021 (last updated: 2018).
Comment by jl6:
When you export a chat, you get a zip file containing the messages as plain text, plus any media files referenced in the chat. The .txt file unfortunately only contains the text-only messages, not the text captions for media items. I reported this as a bug and was told this was functioning as intended.
So this is a warning to anyone who thinks they are backing up their WhatsApp chats via the export feature that their backups are incomplete.
As a workaround, you can get hold of the ChatStorage.sqlite file from an iTunes backup of your phone. All text is in there but you obviously have to query the database and format it into a readable sequence of messages.
This really, really sucks as a workflow and I hope if any WhatsApp engineers ever read this they start working on a real export feature.
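A sketch of that workaround in Python. The table and column names (ZWAMESSAGE, ZMESSAGEDATE, ZISFROMME, ZTEXT) are what third-party tools report for the iOS Core Data store; treat them as assumptions and check the schema of your own copy first:

    import sqlite3

    con = sqlite3.connect("ChatStorage.sqlite")
    rows = con.execute(
        "SELECT ZMESSAGEDATE, ZISFROMME, ZTEXT "
        "FROM ZWAMESSAGE WHERE ZTEXT IS NOT NULL "
        "ORDER BY ZMESSAGEDATE"
    )
    for date, from_me, text in rows:
        # Core Data timestamps count seconds from 2001-01-01, not 1970.
        print(f"{date}\t{'me' if from_me else 'them'}\t{text}")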
Comment by seemslegit:
Didn't we learn that not to be the case, since presumably the device can still flash a new Apple-signed firmware that would override this?
Comment by pier25:
Answering the most important questions.
Comment by dancemethis:
Comment by hawski:
https://www.dragonflybsd.org/cgi/web-man?command=sys_checkpo...
https://www.dragonflybsd.org/cgi/web-man?command=checkpoint&...
Comment by synack:
MPI also comes to mind, but it's more focused on the IPC mechanisms.
I always liked Plan 9's approach, where every CPU is just a file and you execute code by writing to that file, even if it's on a remote filesystem.
Comment by userbinator:
That's what "live migration" does; it can be done with an entire VM: https://en.wikipedia.org/wiki/Live_migration
Comment by ISL:
QNX had a really cool way of doing inter-process communication over the LAN that worked as if it were local. Used it in my first lab job in 2001. You might not find it on the web, though. The API references were all (thick!) dead trees.
Edit: Looks like QNX4 couldn't fork over the LAN. It had a separate "spawn()" call that could operate across nodes.
https://www.qnx.com/developers/docs/qnx_4.25_docs/qnx4/sysar...
Comment by Animats:
A big part of the problem is "fork", which is a primitive designed to work on a PDP-11 with very limited memory. The way "fork" originally worked was to swap out the process, and instead of discarding the in-memory copy, duplicate the process table entry for it, making the swapped-out version and the in-memory version separate processes. This copied code, data, and the process header with the file info. This is a strange way to launch a new process, but it was really easy to implement in early Unix.
Most other systems had some variant on "run" - launch and run the indicated image. That distributes much better.
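To make the contrast concrete, a minimal sketch in Python of the two launch styles (fork-then-exec versus a spawn/"run"-style call):

    import os, subprocess

    # fork-style: duplicate the entire current process, then (usually) exec.
    pid = os.fork()
    if pid == 0:                        # child: a copy of everything the parent had
        os.execvp("ls", ["ls", "-l"])   # replace that copy with a new program image
    else:
        os.waitpid(pid, 0)

    # spawn/"run"-style: a single call that launches a fresh program.
    # The caller never needs a duplicate of its own address space in the child,
    # which is part of why this model distributes across machines more naturally.
    subprocess.run(["ls", "-l"])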
Comment by fitzn:
A somewhat related project is the PIOS operating system, written 10 years ago but still used today to teach the operating systems class there. The OS has different goals than your project, but it does support forking processes to different machines and then deterministically merging their results back into the parent process. Your post reminded me of it. There's a handful of papers that talk about the different things they did with the OS, as well as their best paper award at OSDI 2010.
https://dedis.cs.yale.edu/2010/det/
Comment by dekhn:
I believe people have found other ways to do this. Personally, I think the ECS model (like k8s, but the cloud provider hosts the k8s environment), where the user packages up all the dependencies and clearly specifies the IO mechanisms through late binding, makes a lot more sense for distributed computing.
Comment by peterkelly:
Comment by abotsis:
Comment by YesThatTom2:
Comment by londons_explore:
An rsync-like diff algorithm might also substantially reduce copied pages if the same or a similar process is teleforked multiple times.
Many processes have a lot of memory which is never read or written, and there's no reason that should be moved, or at least no reason it should be moved quickly.
Using that, you ought to be able to resume the remote fork in milliseconds rather than seconds.
userfaultfd() or mapping everything to files on a FUSE filesystem both look like promising implementation options.
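A toy sketch of the page-level diff idea in Python (illustrative only; a real implementation would use rolling checksums like rsync and fault untouched pages in lazily via userfaultfd or a FUSE mapping):

    import hashlib

    PAGE = 4096

    def changed_pages(memory: bytes, remote_hashes: dict):
        """Yield (page_index, page_bytes) for pages whose hash differs from
        what the remote side already holds, skipping everything in sync."""
        for off in range(0, len(memory), PAGE):
            page = memory[off:off + PAGE]
            digest = hashlib.sha256(page).digest()
            if remote_hashes.get(off // PAGE) != digest:
                yield off // PAGE, page

Only the changed pages cross the wire on a repeat telefork; the rest can be faulted in on demand later.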
Comment by jka:
The idea, in abstract, is that you login to an environment where you can list running processes, perform filesystem I/O, list and create network connections, etc -- and any and all of these are in fact running across a cluster of distributed machines.
(in a trivial case that cluster might be a single machine, in which case it's essentially no different to logging in to a standalone server)
The wikipedia page referenced has a good description and a list of implementations; sadly the set of {has-recent-release && is-open-source && supports-process-migration} seems empty.
[1] - https://en.wikipedia.org/wiki/Single_system_image
Comment by dreamcompiler:
[0] https://en.wikipedia.org/wiki/Telescript_(programming_langua...
[1] Yes I know Erlang exists. I wish more people would use it.
Comment by new_realist:
Comment by saagarjha:
Comment by anthk:
If the remote FS is a different arch, I should still be able to run the same binary remotely as a fallback option, seamlessly.
Comment by carapace:
Don't get me wrong, this is great hacking and great fun. And this is a good point:
> I think this stuff is really cool because it’s an instance of one of my favourite techniques, which is diving in to find a lesser-known layer of abstraction that makes something that seems nigh-impossible actually not that much work. Teleporting a computation may seem impossible, or like it would require techniques like serializing all your state, copying a binary executable to the remote machine, and running it there with special command line flags to reload the state.
Comment by crashdelta:
Comment by lachlan-sneff:
Comment by cecilpl2: