• 1.

    Teleforking a Process onto a Different Computer

    • Comment by hawski:

      Seriously cool. That also reminds me of DragonFlyBSD's process checkpointing feature, which offers suspend-to-disk. In the Linux world there have been many attempts, but AFAIK nothing simple and complete enough. To be fair, I don't know if DF's implementation is that either.

      https://www.dragonflybsd.org/cgi/web-man?command=sys_checkpo...

      https://www.dragonflybsd.org/cgi/web-man?command=checkpoint&...

    • Comment by synack:

      This reminds me of OpenMOSIX, which implemented a good chunk of POSIX in a distributed fashion.

      MPI also comes to mind, but it's more focused on the IPC mechanisms.

      I always liked Plan 9's approach, where every CPU is just a file and you execute code by writing to that file, even if it's on a remote filesystem.

    • Comment by userbinator:

      This can let you stream in new pages of memory only as they are accessed by the program, allowing you to teleport processes with lower latency since they can start running basically right away.

      That's what "live migration" does; it can be done with an entire VM: https://en.wikipedia.org/wiki/Live_migration

    • Comment by ISL:

      What's old is new again -- I'm pretty sure QNX could do this in the 1990s.

      QNX had a really cool way of doing inter-process communication over the LAN that worked as if it were local. Used it in my first lab job in 2001. You might not find it on the web, though. The API references were all (thick!) dead trees.

      Edit: Looks like QNX4 couldn't fork over the LAN. It had a separate "spawn()" call that could operate across nodes.

      https://www.qnx.com/developers/docs/qnx_4.25_docs/qnx4/sysar...

    • Comment by Animats:

      That goes back to the 1980s, with UCLA Locus. This was a distributed UNIX-like system. You could launch a process on another machine and keep I/O and pipes connected. Even on a machine with a different CPU architecture. They even shared file position between tasks across the network. Locus was eventually part of an IBM product.

      A big part of the problem is "fork", which is a primitive designed to work on a PDP-11 with very limited memory. The way "fork" originally worked was to swap out the process, and instead of discarding the in-memory copy, duplicate the process table entry for it, making the swapped-out version and the in-memory version separate processes. This copied code, data, and the process header with the file info. This is a strange way to launch a new process, but it was really easy to implement in early Unix.

      Most other systems had some variant on "run" - launch and run the indicated image. That distributes much better.
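
      For readers who haven't used it, here is a minimal sketch of the classic fork() contract being described - standard POSIX C, nothing specific to the article:

          #include <stdio.h>
          #include <stdlib.h>
          #include <sys/types.h>
          #include <sys/wait.h>
          #include <unistd.h>

          int main(void)
          {
              pid_t pid = fork();   /* duplicate the calling process */
              if (pid == 0) {
                  /* child: runs with a copy of the parent's memory and fds */
                  printf("child %d, a copy of the parent\n", (int)getpid());
              } else if (pid > 0) {
                  printf("parent %d created child %d\n", (int)getpid(), (int)pid);
                  wait(NULL);       /* reap the child */
              } else {
                  perror("fork");
                  return EXIT_FAILURE;
              }
              return 0;
          }

      Teleforking, as described in the article, generalises this "duplicate everything" contract across a network boundary instead of within one kernel.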

    • Comment by fitzn:

      Really cool idea! Thanks for providing so much detail in the post. I enjoyed it.

      A somewhat related project is the PIOS operating system, written 10 years ago but still used today to teach the operating systems class there. The OS has different goals than your project, but it does support forking processes to different machines and then deterministically merging their results back into the parent process. Your post reminds me of it. There's a handful of papers that talk about the different things they did with the OS, as well as their best paper award at OSDI 2010.

      https://dedis.cs.yale.edu/2010/det/

    • Comment by dekhn:

      Condor, a distributed computing environment, has done IO remoting (where all calls to IO on the target machine get sent back to the source) for several decades. The origin of Linux Containers was process migration.

      I believe people have found other ways to do this; personally, I think the ECS model (like k8s, but the cloud provider hosts the k8s environment), where the user packages up all the dependencies and clearly specifies the IO mechanisms through late binding, makes a lot more sense for distributed computing.

    • Comment by peterkelly:

      There's been a bunch of interesting work done on this over the years. Here's a literature survey on the topic: https://dl.acm.org/doi/abs/10.1145/367701.367728
    • Comment by abotsis:

      Also of interest might be Sprite - a Berkeley research OS developed “back in the day” by Ken Shirriff and others. It boasted a lot of innovations, like a logging filesystem (not just metadata) and a distributed process model and filesystem allowing for live migration between nodes. https://www2.eecs.berkeley.edu/Research/Projects/CS/sprite/s...
    • Comment by YesThatTom2:

      Condor did this in the early 90s.
    • Comment by londons_explore:

      Bonus points if you can effectively implement the "copy on write" ability of the Linux kernel, so that you only send pages over to the remote machine once they are changed in either the local or remote fork, or read in the remote fork.

      A rsync-like diff algorithm might also substantially reduce copied pages if the same or a similar process is teleforked multiple times.

      Many processes have a lot of memory which is never read or written, and there's no reason that should be moved, or at least no reason it should be moved quickly.

      Using that, you ought to be able to resume the remote fork in milliseconds rather than seconds.

      userfaultfd() or mapping everything to files on a FUSE filesystem both look like promising implementation options.
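
      A minimal sketch of the userfaultfd() route (Linux-specific; error handling and the monitor loop that would actually fetch pages from the source machine are omitted):

          #define _GNU_SOURCE
          #include <fcntl.h>
          #include <linux/userfaultfd.h>
          #include <sys/ioctl.h>
          #include <sys/mman.h>
          #include <sys/syscall.h>
          #include <unistd.h>

          int main(void)
          {
              /* Create a userfaultfd and handshake on the API version. */
              int uffd = syscall(SYS_userfaultfd, O_CLOEXEC | O_NONBLOCK);
              struct uffdio_api api = { .api = UFFD_API };
              ioctl(uffd, UFFDIO_API, &api);

              /* Map a region, but register it so that missing pages raise
                 events on uffd instead of being zero-filled. */
              size_t len = 16 * 4096;
              void *mem = mmap(NULL, len, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
              struct uffdio_register reg = {
                  .range = { .start = (unsigned long)mem, .len = len },
                  .mode  = UFFDIO_REGISTER_MODE_MISSING,
              };
              ioctl(uffd, UFFDIO_REGISTER, &reg);

              /* A monitor thread would now poll() uffd, read struct uffd_msg
                 page-fault events, fetch the faulting page over the network,
                 and install it with ioctl(uffd, UFFDIO_COPY, ...). Any access
                 to `mem` blocks until that page has been supplied. */
              return 0;
          }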

    • Comment by jka:

      This reminds me a little bit of the idea of 'Single System Image'[1] computing.

      The idea, in abstract, is that you login to an environment where you can list running processes, perform filesystem I/O, list and create network connections, etc -- and any and all of these are in fact running across a cluster of distributed machines.

      (in a trivial case that cluster might be a single machine, in which case it's essentially no different to logging in to a standalone server)

      The wikipedia page referenced has a good description and a list of implementations; sadly the set of {has-recent-release && is-open-source && supports-process-migration} seems empty.

      [1] - https://en.wikipedia.org/wiki/Single_system_image

    • Comment by dreamcompiler:

      Telescript [0] is based on this idea, although at a higher level. I wish we could just build Actor-based operating systems and then we wouldn't need to keep reinventing flexible distributed computation, but alas...[1]

      [0] https://en.wikipedia.org/wiki/Telescript_(programming_langua...

      [1] Yes I know Erlang exists. I wish more people would use it.

    • Comment by new_realist:

      See https://criu.org/Live_migration
    • Comment by saagarjha:

      It’s touched on at the very end, but this kind of work is somewhat similar to what the kernel needs to do on a fork or context switch, so you can really figure out what state you need to keep track of from there. Once you have that, scheduling one of these network processes isn’t really all that different from scheduling a normal process, except that, of course, syscalls on the remote machine will possibly go to a kernel that doesn’t know what to do with them.
    • Comment by anthk:

      What I'd love is being able to easily bind remote directories as local ones. Not NFS, but a braindead-simple 9p. If I don't have a tool, I'd love to have a bind-mount of a directory from a stranger, and run a binary from within it (or pipe it) without them being able to trace the I/O.

      If the remote FS is on a different arch, I should be able to run the same binary remotely as a fallback option, seamlessly.

    • Comment by carapace:

      "Somebody else has had this problem."

      Don't get me wrong, this is great hacking and great fun. And this is a good point:

      > I think this stuff is really cool because it’s an instance of one of my favourite techniques, which is diving in to find a lesser-known layer of abstraction that makes something that seems nigh-impossible actually not that much work. Teleporting a computation may seem impossible, or like it would require techniques like serializing all your state, copying a binary executable to the remote machine, and running it there with special command line flags to reload the state.

    • Comment by crashdelta:

      This is one of the best side projects I've ever seen, hands down.
    • Comment by lachlan-sneff:

      Wow, this is really interesting. I bet that there's a way of doing this robustly by streaming wasm modules instead of full executables to every server in the cluster.
    • Comment by cecilpl2:

      This is similar to what Incredibuild does. It distributes compile and compute jobs across a network, effectively sandboxing the remote process and forwarding all filesystem calls back to the initiating agent.
  • 2.

    Show HN: Gmail CLI Utils (bulk delete mail by query, get/create filters)

    • Comment by adamfeldman:

      This looks useful!

      For "Declarative configuration for Gmail filters", see also https://github.com/mbrt/gmailctl

  • 3.

    The third wave of open source migration

    • Comment by jamesblonde:

      This is a hilarious video that captures the state-of-play of open-source frameworks in Data Science:

      https://twitter.com/wdaali999/status/1161973951565881345?lan...

      Basically, anything not open-source is not cool any more - SAS, matlab, SPSS. Kids are not learning these frameworks in school and don't want to use them. I see open-source taking over Data Science by the time this recession is over: Jupyter, conda, Scikit-learn, TensorFlow, PyTorch, RStudio, and even PySpark.

    • Comment by msoad:

      If you work for a big engineering organization you will find yourself questioning a lot of the things being built. Open source alternatives often have higher quality and better support, yet engineering organizations opt to build their own. You might wonder why. It's because growth is the name of the game. Any engineering leader wants a bigger and bigger organization under her/him, so they green-light projects that mostly don't make sense, and they can pretty much lie to the board/CEO about how essential it is to build in house.

      When winter comes, those projects don't make sense anymore because cost-cutting measures are mandated. The same leader might even make the case for the open source alternative.

      I've seen this enough times to know it is a pattern in our industry.

    • Comment by pinky07:

      We have been witnessing this for weeks at Odoo. (https://odoo.com)

      Last month, we lost a big project to SAP (budget 5m€): the company chose SAP because their holding company was willing to pay for the project. Last week, the same prospect came back to Odoo: as the holding could no longer afford such a project, the company has to pay from its own budget. So they chose Odoo (<1m€ budget).

      I believe the next wave is a replacement of proprietary expensive business applications: ERP, SAS, BI...

    • Comment by mooreds:

      I think there's substantial value in replacing expensive system components with free alternatives. Things like FusionAuth / https://fusionauth.io/ for user identity (full disclosure, I'm an employee) and Pentaho Kettle https://github.com/pentaho/pentaho-kettle for ETL and data transformations can help.

      It is important to recognize the value of developer time too, though. There's a cost in dev time for setting up a "free" project.

      That's why I think any open source project that gets too popular will have to have a cloud vendor strategy; otherwise they'll have done to them what AWS did to Elasticsearch.

      I also thought it was interesting that the author mentioned support for the various application libraries. I know that there have been several "tip" type applications (gittip, gitcoin.co) that try to align incentives and allow open source developers to make a living.

    • Comment by prepend:

      I think the growth is on a continuing streak, and there’s no need for Red Hat-style support for every package.

      I think using these packages and projects requires more due diligence and planning by staff to pick and support them, but I think the current highly variable, project-by-project support works out well. And then for big stuff (Linux, Postgres, etc.) some commercial support is brought in.

      I’d much rather see more support for companies donating developer hours to patches and features. Some way to recognize in kind and labor contributions and expand recognition for these kinds of contributions. I think this works better for software than trying to get every company to pay into some support fund. If you want to pay structured licenses for everyone, there’s a model for that. Trying to shoehorn license fees on top of open source loses a lot of the efficiencies, I think.

    • Comment by c-smile:

      Could be, but only if the companies behind OSS products survive by themselves. We already hear cracking sounds here and there.

      Yet "the rise of hosted cloud services like AWS, Google Cloud, and Microsoft Azure" is just an anti-pattern for the subject of the article: commercial companies that exploit (a fuzzy term, but still) OSS software.

    • Comment by mrfusion:

      I’d love to see more open source hardware as the next wave. The Arduino seems like it’s a success.
    • Comment by okram:

      The third wave of open source software is no software at all. It is only a matter of time before Amazon doesn't care whether it's licensed Apache2 or not. They will just take software and sell it. You have a problem with that? Have fun suing them... Year 1..2..3..oooo. you are quite the fish..4..5. broke. Out of money.

      Tech is dead.

  • 4.

    A-Shell: Terminal for iOS

    • Comment by thecybernerd:

      Is there any way I could use a USB C to DB9 console cable on an iPad Pro with this? I’m thinking this could be a great setup for working in the cramped data center.
    • Comment by GekkePrutser:

      Nice!!!

      What iOS really needs to make this useful though is a way to project to a proper screen and keyboard/mouse configuration. Like Samsung DeX. Kinda hoping this will happen as they are making the iPad Pros more like a computer.

    • Comment by alexhutcheson:

      This is extremely cool.

      Is there any documentation on what shell syntax this supports? I assume it's not running a standard shell like Bash or zsh.

      Edit: https://github.com/holzschu/ios_system/blob/master/README.md confirms it's not running sh, bash, or zsh, and has some additional details on the available commands. I still think it would be nice for this to be more explicit, but the information is out there.

    • Comment by jlgaddis:

      Just this morning I realized I had a need for a decent SSH client for iOS, although I don't need something that includes vim, clang, Lua, Python, and C.

      That's extreme overkill for my needs -- I'd just like the ability to log in to a few hosts (preferably using public key authentication!) and run various commands just like I normally do in a terminal.

      If anyone has any "favorites" they recommend, I'd be interested in hearing about them. I'd prefer something open-source (out of principle) but I'm certainly not opposed to paying a reasonable amount.

    • Comment by jedisct1:

      I use iSH, which provides a complete Alpine Linux environment: https://ish.app

      A-Shell seems to be very limited and additional packages cannot be installed. What are uses cases for which A-Shell would be a better fit than iSH?

    • Comment by thesuperbigfrog:

      A-Shell looks very promising. If I were still using iOS, I would install it and try it out in a heartbeat.

      On Android, I love using Termux (https://termux.com/).

      If I have a computer in my pocket, I should be able to use it as a computer, not merely a consumption device.

    • Comment by 5-:

      LibTerm is very similar (also based on ios_system): https://libterm.app/

      iSH uses a completely different approach -- it's a custom x86+Linux emulator that runs a complete, unaltered Alpine Linux userland: https://ish.app/

    • Comment by airstrike:

      Link to git repo https://github.com/holzschu/a-shell
    • Comment by RodgerTheGreat:

      I can see that this tool in turn leverages ios_system, but neither A-shell nor the ios_system github repository appear to present an exhaustive list of the available commands. This seems like an obvious first step in improving documentation.
  • 5.

    'Expert Twitter' Only Goes So Far – Bring Back Blogs

    • Comment by nullc:

      The infinite scroll addiction pipeline has segmented the online world largely into two parts: drooling scroll zombies on twitter/facebook/reddit/etc, and people who don't partake at all.

      The audience of people who might read your blog but who aren't stuck on a scroll treadmill is too small to bother with, especially with the death of many popular RSS readers.

    • Comment by joelrunyon:

      I've been on this train for a long, long time.

      If you don't own your platform, you don't own your content.

      Register your domain, install wordpress, start your own blog.

      We actually created https://startablog.com to drive this point home, teach people how to do it and even have a team that will do it for you for free if you need the help (lots of people still find the WP install process intimidating).

    • Comment by andy_ppp:

      Medium had such a great stab at doing this before they completely lost their way. I think someone should make another attempt at this, there doesn’t need to be a billion dollar company here to still get very rich and, more importantly, make something people want...
    • Comment by notacoward:

      The obstacle is not the format but the ease of sharing. Yes, I've had a blog since ~2000, I know there are some ways to share others' content on your own blog, but it never became as easy or readily consumable as a retweet.

      Ironically, I think the reason is recognizable from epidemiology. The network of twitter followers is just denser than the network of bloggers ever was or likely ever will be. Even the very best blog posts still tended not to spread even an order of magnitude as well as a good tweet thread. As much as I hate the format, I don't think blogs can or will displace it.

    • Comment by smitty1e:

      Blogs never left, as a technical matter.

      Go to Wordpress and get your dog-gone blog on.

    • Comment by askafriend:

      You can bring them back, but I don't have time or attention span for 100 blogs.

      I still like Twitter for getting a broad range of thoughts quickly. I view it as a very rough pulse of public consciousness. Not a research paper or word from God.

    • Comment by Icathian:

      The devil is always in the details. Deciding who pays to host new platforms, who gets to gatekeep content, etc., ad nauseam, would likely sink any such effort.

      Frankly, that seems like a long way to go for what is effectively twitlonger. If you buy the premise that Twitter is effective at amplifying the right voices (very much still up for debate), and the problem is the interface for long-form content, then it seems to need a much simpler UX fix rather than trying to invent a separate-but-joined platform from whole cloth.

    • Comment by HugoDaniel:

      What is your opinion on dev.to ?
    • Comment by pgt:

      If you are an expert reading this considering writing a blog - whatever you do, please don't start publishing on a paywalled medium like Medium.
    • Comment by asdfadfaf:

      No one has mentioned substack yet. It's a product that seems to be on the right track.
    • Comment by some_furry:

      I just started a blog last week.

      https://soatok.blog

    • Comment by rado:

      Sorry, can't get web publishing advice from Wired. https://i.imgur.com/FaX02Aw.png
    • Comment by DrNuke:

      Imho the fundamental flaw with that (legit) request is that scientific communication is different from, and not meant to replace, proper science... that’s why Twitter stars like the ones the article mentions really shine: they give out the synthesis in a sound manner, and that’s enough?
  • 6.

    Creating ad hoc microphone arrays from personal devices (2019)

    • Comment by crazygringo:

      This is a really interesting technical concept.

      Capturing high-quality audio in a meeting room for videoconferencing is a notoriously complicated problem.

      Microphones are crazy sensitive and pick up things like footsteps and conversations outside the door, shuffling feet and tapping on keyboards, and construction and HVAC noise like you wouldn't believe.

      So filtering those things out, and then capturing the best quality audio from the current speaker, and trying to get everyone's voice at roughly the same volume whether they're sitting directly across from the microphone or are piping up from the corner of the room...

      ...and do this all while cancelling 100% of the echo that might be coming from two or three speakers at once...

      ...it's an insanely hard problem. Beamforming microphones absolutely help in a huge way, because if you know the speaker's voice is coming from 45° then knowing that any sound coming from any other angle can be removed is a really helpful piece of info.

      Now, with beamforming microphones, the precise relative location and direction of each mic is known. The idea of creating one big beamforming mic for the room out of people's individual mics is... insanely hard, but super cool.

      It's interesting to me that this article is about measuring the quality of voice transcription, rather than about the quality of audio in an actual meeting. But I suppose the voice transcription quality measurement is simply a proxy for the speaker audio quality generally, no?

      This could actually be a huge step forward in not needing videoconferencing equipment in meeting rooms. So far, one of the biggest reasons has actually been dealing with echo and feedback -- when people are in the same call with multiple devices in the same room, it tends to end badly. But if the audio processing is designed for that... the results could actually be quite amazing.

      And it's well-known that the "bowling alley" visual of meeting participants (camera at the end of a long conference table) isn't ideal. If each participant has their own laptop camera on themselves, it could be a vastly better experience for remote participants.
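
      For anyone curious what "removing sound coming from any other angle" looks like mechanically, here is a bare-bones delay-and-sum beamformer for a linear array with known geometry - a textbook sketch only; the hard part in the article is doing this blind, with unknown mic positions and no shared clock:

          #include <math.h>
          #include <stddef.h>

          #define SPEED_OF_SOUND 343.0   /* m/s in room-temperature air */

          /* mics[m][n]: sample n from microphone m; d: mic spacing in metres;
             theta: steering angle in radians from broadside. */
          void delay_and_sum(float **mics, size_t nmics, size_t nsamples,
                             double d, double theta, double sample_rate,
                             float *out)
          {
              for (size_t n = 0; n < nsamples; n++) {
                  double acc = 0.0;
                  for (size_t m = 0; m < nmics; m++) {
                      /* Extra path length to mic m is m*d*sin(theta);
                         convert it to a sample delay and realign. */
                      long delay = lround(m * d * sin(theta) /
                                          SPEED_OF_SOUND * sample_rate);
                      long idx = (long)n - delay;
                      if (idx >= 0 && idx < (long)nsamples)
                          acc += mics[m][idx];
                  }
                  /* Sound arriving from angle theta adds coherently;
                     everything else adds incoherently and is attenuated. */
                  out[n] = (float)(acc / (double)nmics);
              }
          }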

    • Comment by pjc50:

      My employer calls this "far field" audio, and has a number of hardware/firmware solutions: https://www.cirrus.com/products/cs48lv41f/ (we're also very secretive, so I can't really discuss it beyond the public website)

      The specific improvement Microsoft are touting is blind beamforming, without knowing where the microphones are located relative to each other. Regular beamforming is already in use in some products.

    • Comment by peter_d_sherman:

      Excerpt:

      "While the idea sounds simple, it requires overcoming many technical challenges to be effective. The audio quality of devices varies significantly. The speech signals captured by different microphones are not aligned with each other. The number of devices and their relative positions are unknown. For these reasons and others, consolidating the information streams from multiple independent devices in a coherent way is much more complicated than it may seem. In fact, although the concept of ad hoc microphone arrays dates back to the beginning of this century, to our knowledge it has not been realized as a product or public prototype so far."

      Thoughts:

      There's something deep here, not with respect to microphones and speech transcription (although I wish Microsoft and whoever else attempts to wrestle with those problems the greatest of success!)

      There's a related deep problem in physics here.

      If we consider signals that emanate from outer space, let's say they're from the big bang, or heck, let's just say they're from one of our past-the-edge-of-this-solar-system satellites -- one that wants to communicate back to Earth.

      Well, due to the incredible distances involved, the signal will get garbled in various ways...

      So here's the $64,000 question:

      When that signal from deep space gets garbled, isn't it possible that it turns into various other signals, at various different other frequencies and wavelengths?

      In other words, space itself, over long distances, acts as a prism (not really, but as an easy way to wrap your mind around this concept), for radio, and other electromagnetic waves...

      Now, if you want to reconstruct the original message at these long distances, you must be able to reconstruct garbled radio (and other EM) waves, which are moving at different frequencies, and may even arrive at the destination at different rates of speed with various time shifts...

      Basically, you've got to take those pieces -- move them to the correct frequency, time correct them, speed them up or slow them down, sync them, and overlay them -- to reconstruct the original message...

      That's the greater question in physics -- the ability to do all of that, with em signals from a long way off in space...

      The article referenced -- is the microphone/audio/slow speed equivalent -- of that larger problem...

    • Comment by itchyjunk:

      There are obvious(?) privacy issues and what not here. But ignoring all that for a second, it does sound pretty cool to be able to leverage all the little computers we walk around with.

      Think of all those shitty little video clips people take at a concert. Could all those be combined to make some high quality panoramic video? Probably a lot of other cool applications that I can't even comprehend for now. What a time to be alive.

    • Comment by Zenst:

      Interesting and doable. From my experience in this area, you need a reference sound to calibrate, though that calibration could be ongoing for something like this.

      It comes down to matching a single sound and working out the timing of that sound across the multiple sources. Then you also need to factor in the frequency response as well.

      That last part would be important for handling things like the table the devices are sitting on picking up vibrations from the desk. Remember that phones don't have a rubber base to isolate them from the table, so any vibration of that surface will propagate into the device and its microphone. Then there's the whole aspect of varying devices and, with that, varying microphone quality and housings. So calibrating at some level would be key for this to work. It is doable, and processing-wise you could even run a master device and handle the processing there, removing the server aspect, with some of the processing done on each local device and passed to the main device for correlation. Some phones certainly have the power to handle this kind of thing in place of the server. But that would be more work/effort and something we may well see later on, though it makes it harder to sell a bit of server processing software.

      Though one test I'd like to see this system handle would be how well it filters out those vibrations.

      After all you don't want to hear somebody writing or putting a cup or other object down whilst somebody else is talking.

      I'd also wonder what kind of jitter tolerances they are working with across those devices and how that scales with the number of devices - does jitter increase after so many devices?
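
      The "matching a single sound and working out the timing" step is usually a cross-correlation; a naive sketch (assuming the clips are already coarsely trimmed to the same window):

          #include <stddef.h>

          /* Returns the lag (in samples) of y relative to x that maximises
             their cross-correlation; a positive result means y is a delayed
             copy of x by that many samples. O(n * max_lag), fine for a sketch;
             real systems do this with FFTs (e.g. GCC-PHAT). */
          long estimate_lag(const float *x, const float *y, size_t n,
                            long max_lag)
          {
              long best_lag = 0;
              double best = -1e300;
              for (long lag = -max_lag; lag <= max_lag; lag++) {
                  double sum = 0.0;
                  for (size_t i = 0; i < n; i++) {
                      long j = (long)i + lag;
                      if (j >= 0 && j < (long)n)
                          sum += (double)x[i] * (double)y[j];
                  }
                  if (sum > best) { best = sum; best_lag = lag; }
              }
              return best_lag;
          }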

    • Comment by geokon:

      Does anyone have any insight into why neural nets are used for the "blind" beamforming? I don't have first hand experience with machine learning, but this just doesn't seem to me like a machine learning type of problem. I get it's not trivial, but it seems like there should be an analytic solution - more or less
    • Comment by stuaxo:

      Oh, I wanted this years ago, when phones had terrible microphones and audio codecs.

      The idea was that at a gig loads of people would record and you could reconstruct a much better recording.

    • Comment by stragies:

      I look forward to exploring that github source drop.
    • Comment by andrewfromx:

      Wow, I just added https://news.ycombinator.com/item?id=22956082 a few days ago - on point, no?
    • Comment by kohtatsu:

      Would be cool if Microsoft gave more shits about privacy.

      Edit: This would be cool if I trusted Microsoft to properly handle privacy.

  • 7.

    C program proofs with Frama-C and its weakest-precondition plugin [pdf]

    • Comment by ngneer:

      I remember Frama-C for its slicing. I have recently become interested in Datalog driven approaches, such as Souffle, for static analysis of large codebases. Anyone with experience or thoughts along those lines?
    • Comment by fizixer:

      So this system uses the spec language ACSL (I assume it's a formal spec language, FSL).

      How widely used is it? I've heard of TLA+ being very popular. Then there is Z notation, and about half a dozen or more others.

      If there is no standard FSL, do I have to learn a new FSL every time I want to apply formal methods to a slightly different system I'm working with?

      Formal methods is a hard enough area. Lack of standardization makes it harder.
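
      For a flavour of what ACSL looks like, here is a generic textbook-style contract (not an example from the linked document); the WP plugin turns the annotation into proof obligations when you run something like "frama-c -wp file.c":

          /*@ requires \valid(a) && \valid(b);
            @ assigns *a, *b;
            @ ensures *a == \old(*b) && *b == \old(*a);
            @*/
          void swap(int *a, int *b)
          {
              int tmp = *a;
              *a = *b;
              *b = tmp;
          }

      ACSL is specific to C and lives in comments next to the code it specifies, so it doesn't compete head-on with system-level specification languages like TLA+ or Z.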

    • Comment by therein:

      I wasn't sure what WP stood for. My mind jumped to WordPress but that obviously wouldn't make sense.

      Turns out it stands for Weakest Precondition.

  • 8.

    Ask HN: What scientific phenomenon do you wish someone would explain better?

    • Comment by pjungwir:

      Quantum spin. Electrons aren't really spinning, right? But why do we call it spin? I know it has something to do with angular momentum. What are the possible values? Is it a magnitude or a vector? Is there a reason we call it "spin" instead of "taste" or some other arbitrary name? How do you change it? What happens to it when particles interact?
    • Comment by qubex:

      I find most explanations of the Equivalence Principle that lies at the foundation of General Relativity to be very lax.

      To wit, the idea is that you cannot distinguish whether you are in an accelerated frame or in a gravitational field; alternatively stated, if you’re floating around in an elevator you don’t know whether you’re freefalling to your doom or in deep sidereal space far from any gravitational source (though of course, since you’re in an elevator car and apparently freefalling... I think we’d all agree on what’s most likely, but I digress).

      Anyway, what irks me is that this is most definitely not true at the “thought experiment” level of theoretical thinking: if you had two baseballs with you in that freefalling lift, you could suspend them in front of you. If you were in deep space, they’d stay equidistant; if you were freefalling down a shaft, you’d see them move closer because of tidal effects, dictated by the fact that they’re each falling towards the earth’s centre of gravity, and therefore at (very slightly) different angles.

      Of course, they’d be moving slightly toward each other in both cases (because they attract gravitationally), but the tidal effect is additional and present in only one scenario, allowing one to (theoretically) distinguish the two cases, apparently violating the bedrock Equivalence Principle.

      I never see this point raised anywhere and I find it quite distressing, because I’m sure there’s a very simple explanation and that General Relativity is sound under such trivial constructions, but I haven’t been able to find a decent explanation.

    • Comment by phkahler:

      Wave function collapse. As far as I can tell there is no discernible difference between a particle whose wave function has collapsed (perhaps via measurement of another, entangled one) and one that hasn't.
    • Comment by ramboldio:

      Fourier Transforms. I wish I had an intuitive understanding of how they work. Until then I'm stuck with just believing that the magic works out.
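
      One way to demystify at least the discrete case: each DFT output bin is just a dot product of the signal with a complex sinusoid - "how much of this frequency is present". A deliberately naive O(N^2) sketch (the FFT computes exactly the same thing, only faster):

          #include <math.h>
          #include <stddef.h>

          void dft(const double *x, size_t n, double *re, double *im)
          {
              const double two_pi = 2.0 * acos(-1.0);
              for (size_t k = 0; k < n; k++) {          /* one bin per frequency */
                  re[k] = 0.0;
                  im[k] = 0.0;
                  for (size_t t = 0; t < n; t++) {
                      double angle = -two_pi * (double)k * (double)t / (double)n;
                      re[k] += x[t] * cos(angle);       /* correlate with cosine */
                      im[k] += x[t] * sin(angle);       /* ...and with sine */
                  }
              }
          }
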
    • Comment by Dutchie85:

      Why does time slow down or speed up with movement relative to another object?

      The well-known example is that if you travel into space you'd age, let's say, 5 years while people on Earth age 25 in the same time, or so.

      I just don't get it and I can't find any logical explanation.

      For instance: two twins who are born at exactly the same moment in the year 2000 and both die on their 75th birthday at the same time. One travels into space, the other stays on Earth. The Earth brother dies in Earth year 2075; the space brother dies in Earth year 3050 or so...

      I know it's Einstein's point, but that just doesn't instantly make it correct to me.
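
      The arithmetic behind those numbers, at least, is the Lorentz factor from special relativity (a standard formula, not an explanation of the "why", and it ignores the turnaround acceleration that resolves the full twin paradox):

          \Delta t_{\text{Earth}} = \gamma \, \Delta\tau_{\text{traveller}},
          \qquad
          \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}

      At v ≈ 0.98c, γ ≈ 5, which is where examples like "5 years for the traveller, 25 on Earth" come from.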

    • Comment by daenz:

      Why a flat earth is impossible.
    • Comment by lpellis:

      Bell's theorem. It somehow proves that quantum physics is incompatible with local hidden variables, but I could never see an understandable explanation (for me at least) of just how it works.
    • Comment by pjungwir:

      Entropy. Sometimes you read that it's a measure of randomness; sometimes, information. Aren't randomness and information opposites?
    • Comment by anton_tarasenko:

      Subreddit /r/askscience does a good job at explaining science in plain words. I usually google "site:reddit.com/r/askscience/ __QUESTION__".

      The StackExchange sites have less coverage and answers tend to be more technical.

      University websites return reliable answers, but often neither short nor accessible.

    • Comment by npr11:

      Automatic differentiation. It's useful for so much computational work, but most people only get a cursory introduction to the topic (a rough intro to the minimum they need to know), whereas really understanding it seems to open up a lot of research.
    • Comment by crosser:

      Non-linear optics explained from a quantum standpoint (the classical explanation is quite clear).
    • Comment by davidmanheim:

      Non-interactive zero knowledge proofs.

      ZK proofs have a number of good explainers, mostly using graph colorings. Non-interactive versions, however, require quite a bit more than that explanation allows - and despite asking experts, I still haven't found a good, basic explanation.

    • Comment by mynegation:

      Why are tardigrades so hardy, and how is their biology so different?

      How the immune system and medications work.

      Why some plastics are recyclable and others are not.

    • Comment by pgt:

      Gravity wells. I only realised in my 20s that the only reason satellites can orbit the Earth without crashing into the ground is by going sideways really, really fast. So as they inch closer to the ground, they also travel parallel to the ground fast enough so that they stay approximately the same height from the ground.
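
      The "really, really fast" has a concrete value: for a circular orbit, gravity has to supply exactly the centripetal acceleration, which gives the standard Newtonian result

          v_{\text{circ}} = \sqrt{\frac{GM_{\oplus}}{r}}
          \approx \sqrt{\frac{3.99 \times 10^{14}\ \mathrm{m^3/s^2}}{6.77 \times 10^{6}\ \mathrm{m}}}
          \approx 7.7\ \mathrm{km/s}

      i.e. roughly 7.7 km/s sideways at ~400 km altitude, so the ground keeps curving away as fast as the satellite falls toward it.
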
    • Comment by vvoyer:

      vertical alignment in CSS
    • Comment by qqqqquinnnnn:

      Another frustrating one - what is heredity? If it's possible to inherit something due to a shift in behavior (i.e. a cultural change that leads to a biochemical change), how does that connect neatly to Mendelian inheritance?
    • Comment by VygmraMGVl:

      I had a very mathy explanation of Spinodal Decomposition in my graduate work. I wonder if there's a more intuitive explanation than just "that's how the energy landscape works".
    • Comment by plurinshael:

      Spin aka intrinsic angular momentum
    • Comment by airstrike:

      The one-electron universe is always a personal favorite. Though more a far-fetched theory than a proper "scientific phenomenon", I'd be eager to learn more about it in layman's terms.

      https://en.wikipedia.org/wiki/One-electron_universe

      https://www.youtube.com/watch?v=9dqtW9MslFk

    • Comment by curiousgal:

      Measure theory.
  • 9.

    Psychological techniques to practice Stoicism

    • Comment by Svip:

      The problem with Cynicism, Scepticism, Epicureanism and Stoicism is that they don't really adhere to the notion of 'everything in moderation'. The logical extreme of any of them can lead to some genuinely useless approaches to life.

      If one should never worry about things that they cannot possibly control, even if they directly affect one's life, because we are just going to cease to exist at some point anyway, how would one know whether or not they could alter them, if they never began worrying? This very idea led several prominent Stoics to commit suicide - because why not hasten one's eventual ceasing to be?

      Perhaps if they had concerned themselves with things that on the surface seemed outside of their reach, they might have realised that some things are approachable, even if the solution is not obvious.

      The idea that one should avoid worrying about things outside one's control is not a bad suggestion in general; it just should not be taken to an extreme. I mean, there is probably a reason why philosophers went back to Aristotle and Plato after those other four schools saw prominence.

      Jewish, Christian and Islamic philosophers weren't trying to make their religions compatible with Zeno's or Epicurus' teachings, but rather Plato's and later Aristotle's.

    • Comment by NegativeLatency:

      > Willpower is like muscle power: the more exercise, the stronger they are; the more will power we have, the more self-control and courage we have.

      IIRC: studies have not validated this

    • Comment by kashyapc:

      For those who wish to truly deep-dive, I strongly suggest to skip the "meta books" on Stoicism, and go straight to the original works. There's the Big Three—Seneca, Epictetus, and Marcus Aurelius. Be prepared to invest at least ten months (the longer, the better) of active study to get a decent grounding.

      From my experience of reading multiple translations of the Big Three, for someone new to Stoicism, I'd suggest not to start with the popular recommendation of Marcus Aurelius.

      Start with Seneca's Letters, then Epictetus (an ex-slave, and a profound influence on Marcus Aurelius), and only then Marcus Aurelius, the Roman Emperor. (To quote the foremost Stoic scholar, A.A. Long: "[...] That an ex-slave actually shaped a Roman Emperor's deepest thoughts is one of the most remarkable testimonies to the power and applicability of Epictetus' words.")

      The quality of the English translation matters a lot. Here's my recommendations:

      • Seneca: Letters on Ethics — translation by Margaret Graver and A. A. Long. This is the most recent translation; it reads extremely well, has outstanding notes, and is wonderfully typeset. It's done by the current foremost experts; it can't get better than this. I've been reading this for four months. (If this is a tad pricey for you, there are also Oxford and Penguin editions of a selection of Seneca's letters.)

      • Epictetus: Encheiridion, and Selections from Discourses, by A.A. Long. This is a short book; the value addition here is the great introduction, and the outstanding glossary. (NB: there is no escaping full Discourses of Epictetus—refer below.)

      • Epictetus: Discourses, Fragments and Handbook — translation by Robin Hard, intro by Christopher Gill; Oxford University Press. Spend a good four months immersing yourself in it. Epictetus is full of heavy irony, dark humor, histrionic wit, and sarcasm. Absolutely my favourite.

      • Epictetus: A Stoic and Socratic Guide, by A.A. Long. (Important note: to get maximum value out of this, you must have already read at least one translation of Epictetus' full Discourses!) This book orients the reader to Epictetus with extremely valuable context: how not to misinterpret his unqualified faith in "divine providence" (which can grate on our "modern ears"); the influence of Plato and the "Socratic Elenchus" (colloquially known as the "Socratic Method"); deep insights into Epictetus' own inimitable style; and a rich bibliography.

      • Marcus Aurelius: Meditations. There are at least six translations. I'd suggest to start with the gentler translation by Gregory Hays. If you like it, then you can research other translations. (A.S.L Farquharson spent a lifetime on his translation of the Meditations; it also has commentary. I sometimes consult this edition.)

      • The Inner Citadel: The Meditations of Marcus Aurelius, by Pierre Hadot. This needs to be read only after you've read at least one translation of Marcus Aurelius. It is a fantastic dissection of Aurelius' work — Hadot studied him for 25 years. Besides fresh translations of the Meditations, it also contains an unparalleled summary of Epictetus and many quotes from Seneca.

    • Comment by a-saleh:

      Just beware you actually practice stoicism, acceptance and the good stuff and not just dissociation ;-)
    • Comment by troughway:

      >https://hoanhan101.github.io/about

      I want psychologists to write a solid book on this subject.

      So far the majority of posts I have read on this subject have been by software developers, which strikes me as bizarre. You can leave your Zen and the Art of Motorcycle Maintenance out of this.

      Software developers are not qualified to write about this. They are clueless dingbats and don't know it.

      How about some proper sources on the subject?

    • Comment by exit:

      > A person’s virtue depends on their excellence as a human being, how well one performs the function for which humans were designed.

      humans weren't designed, they were selected for through a happenstantial process. random mutation is critical to this process, and so any one of us can be deeply at odds with whatever the majority are geared towards.

    • Comment by westurner:

      Stoicism https://en.wikipedia.org/wiki/Stoicism

      Meditations (Marcus Aurelius) https://en.wikipedia.org/wiki/Meditations

  • 10.

    Decrypt WhatsApp encrypted media files

    • Comment by natch:

      The article (the readme on github) takes the following quote wildly out of context:

      > A recent high-profile forensic investigation reported that “due to end-to-end encryption employed by WhatsApp, it is virtually impossible to decrypt the contents of the downloader [.enc file]

      This quote clearly means it is virtually impossible without the key. OF COURSE if you have full access to the device as a logged in user, then you can get access to the key and decrypt things that cannot be decrypted by others who do not have the key. Nothing to see here.

      At least, to the author’s credit, the FAQ answers below clarify this - but not before the lead-in, which is all most people read, has already done the damage of dramatically planting the incorrect impression that someone has figured out how to break WhatsApp encryption.

    • Comment by sloshnmosh:

      There is an app on the Google Play Store that claims to be an “antivirus/cleaner” app but abused Android's accessibility APIs to access WhatsApp's media files. The developer called it a “WhatsApp cleaner”. The app was removed from the Play Store for several weeks but was allowed to return. The developers now claim to PROTECT against unauthorized access to WhatsApp’s media files. The app is affiliated with China’s Qihoo, which was booted from the Play Store long ago for hijacking users' WebView with fake virus warnings to boost installs - as this app has been doing every day since 2013.
    • Comment by Funes-:

      TL;DR: This program decrypts encrypted media files you yourself have received through WhatsApp, "in the same way that the WhatsApp app does to display it on the screen." It doesn't decrypt other people's media files, as the title could suggest.
    • Comment by smashah:

      As maintainer of https://github.com/open-wa/wa-automate-nodejs

      Everyone needs to be incredibly careful about being phished for their WhatsApp Web logins.

      Also, WhatsApp does not respect message integrity regardless of e2e encryption. They WILL mutate your message if required.

    • Comment by ignoramous:

      Not just media, one could extract WhatsApp's cipher-key and message-db (not sure if it works on current versions) without requiring root: https://forum.xda-developers.com/showthread.php?t=2770982 (2016).

      And here's a desktop viewer to search through decrypted files: https://forum.xda-developers.com/showthread.php?t=1583021 (last updated: 2018).

    • Comment by jl6:

      I thought I’d take this opportunity to describe my recent experience submitting a bug report for WhatsApp on iOS.

      When you export a chat, you get a zip file containing the messages as plain text, plus any media files referenced in the chat. The .txt file unfortunately only contains the text-only messages, not the text captions for media items. I reported this as a bug and was told this was functioning as intended.

      So this is a warning to anyone who thinks they are backing up their WhatsApp chats via the export feature that their backups are incomplete.

      As a workaround, you can get hold of the ChatStorage.sqlite file from an iTunes backup of your phone. All text is in there but you obviously have to query the database and format it into a readable sequence of messages.

      This really, really sucks as a workflow and I hope if any WhatsApp engineers ever read this they start working on a real export feature.

    • Comment by seemslegit:

      > No. WhatsApp uses iOS Data Protection to encrypt user data files (including ChatStorage.sqlite) using the device-specific and unrecoverable hardware UID key as well as a key derived from the user's passcode. It may not be decrypted without physical access to the specific iOS device that created the file as well as knowledge of the user's passcode.

      Didn't we learn that not to be the case, since presumably the device can still flash a new Apple-signed firmware that would override this?

    • Comment by pier25:

      > Can you help me decrypt someone's WhatsApp?

      Answering the most important questions.

    • Comment by dancemethis:

      Somewhat disheartening that the author believes by default that the encryption wasn't tampered with on the proprietary server side of this proprietary client.