hacker news with inline top comments 21 Aug 2017 Best
Why We Terminated Daily Stormer cloudflare.com
855 points by SamWhited  4 days ago   1513 comments top
r721 4 days ago 14 replies      
>"This was my decision. This is not Cloudflare's general policy now, going forward," Cloudflare CEO Matthew Prince told Gizmodo. "I think we have to have a conversation over what part of the infrastructure stack is right to police content."

(from internal email)

>Let me be clear: this was an arbitrary decision. It was different than what I'd talked with our senior team about yesterday. I woke up this morning in a bad mood and decided to kick them off the Internet. I called our legal team and told them what we were going to do. I called our Trust & Safety team and had them stop the service. It was a decision I could make because I'm the CEO of a major Internet infrastructure company.


Essential Phone, available now essential.com
815 points by Garbage  3 days ago   668 comments top 5
zanny 3 days ago 12 replies      
Since no one else has, I'll take the piss out of this "holier than thou" bullshit.

> Devices are your personal property. We won't force you to have anything you don't want.

Devices are your personal property. The SoC is still a proprietary trade secret, the baseband is still spying on you for the NSA, the GPU is still a closed blob piece of shit. No mainline driver support, bootloader is closed source, firmware is closed source. We own this phone, you don't.

> We will always play well with others. Closed ecosystems are divisive and outdated.


> Devices shouldn't become outdated every year. They should evolve with you.

Devices become outdated because shitty vendors refuse to open source and mainline drivers for their components.

> Technology should assist you so that you can get on with enjoying life.

Technology should be trustable, and a device where you cannot tell if or when the microphone and/or camera are recording and being remotely accessed is anything but.

Not wanting to single Essential out too much here - every vendor goes on and on about how great their phone is for you, while holding as tight a vice grip over the operation of the device as possible to make sure you need to buy another one as soon as possible through planned obsolescence. It's just that the stick-up-the-ass language announcements like these use is really infuriating when the people making them know full well how much they're screwing you over.

The first actually open platform phone is the one that will have longevity. The rest are snake oil about how good care they will take of you, because you can't take care of yourself with your own software that you can trust.

foobaw 3 days ago 4 replies      
As someone who worked at a large OEM releasing tons of smartphones, I'm actually impressed it only took 100 people to get this out. I presume there were an incredible number of sleepless nights, as this is no easy task.

To be fair though, Sprint is one of the easier carriers to work with after T-Mobile. I can't imagine them releasing a phone on AT&T or Verizon, as their process is grueling. I guess since they're selling an unlocked version of their phone, it doesn't really matter to power users. However, most smartphone sales come from contracts sold directly by carriers, so it'll be interesting to see how they'll do in the market with their current strategy (similar to the OnePlus One).

Props to them though. It's not just about carrier certification. Releasing a smartphone is a long complex process. Some engineers at Sprint were briefly talking about how great the phone was, so I have high hopes.

ariofrio 3 days ago 7 replies      
Give me software updates for 7+ years, then we'll talk about buying your $700 phone. Lasting hardware means nothing without lasting software.

In the meanwhile, I'll keep buying $120 phones (Moto G4 with Amazon Ads FTW) and keeping them for ~2 years until they break or software updates stop. Even though as a Catholic (Laudato Si, Rerum Novarum) it kills me to waste all those materials every couple of years and be part of the environmental degradation of our planet.

Hasz 3 days ago 9 replies      
You want fixable and well designed, long software updates, and a good price?

Buy an (old) iPhone.

I've got a 5S -- still perfectly fast for what I use it for (email, YouTube, brokerage account, general internet, some small games), and it's getting OS updates and security patches through iOS 11. It's $120 on eBay; a new screen can be had for $13, a new battery for $11. It's solidly designed and there's a gigantic field of accessories and apps.

Maybe titanium and no bezels are worth a price premium, but there's no way it's worth a 5x increase in price.

git-pull 3 days ago 17 replies      
I admire the gumption of making a new phone.

But controlled obsolescence kills me. For me, the real thing that has improved in phones over the past few years is the software and apps, not the hardware.

My wishlist:

- Give me a lighter, snappier OS. Not something clunkier and slower that uses more RAM and GPU/CPU (a.k.a. battery life).

- Actually support updates to the things for longer than 2-3 years.

- (Not related to this phone) Use stock Android, unless you're removing bloat. Why? Because inevitably there are going to be apps. What I want is a nice flat surface that includes wifi, bluetooth, and nice APIs and permissions for those apps to plug into.

- The biggest feature you can give me on a phone? Battery life, Replaceable battery, Data/Cell reception, Speaker/Microphone quality.

- SIM card that's easy to get out.

- Actually, dual SIMs.

- Support for carriers globally.

- And physical keyboards. Something for SSH'ing with.

Explaining React's license facebook.com
934 points by y4m4b4  2 days ago   558 comments top 4
kevinflo 1 day ago 17 replies      
I love love love love love React as a technology, but this is just awful. I believe any developer not on Facebook's payroll still contributing to React or React Native at this point has a moral obligation to stop. I personally feel like such a fool for not taking all this seriously before the ASF gave me a wakeup call. React is a Trojan horse into the open source community that Facebook purposely and maliciously steered over time to deepen their war chest. Maybe that's an overblown take, but they had a perfect opportunity here to prove me wrong and they didn't. The defensive cover they present here feels so paper thin.

Even if we paint all of their actions in the most favorable possible light, and even if the clause is a paper tiger as some have claimed, it doesn't matter. This is not how open source should work. We should not have to debate for years if a project's license is radioactive. Especially individual devs like myself who just want to use a great tool. We should be able to just use it, because it's open and that's what open means. This is so much worse than closed. It's closed masquerading as open.

DannyBee 1 day ago 3 replies      
So, I feel for them, having watched Google's open source projects be targeted by patent trolls in the past. But I really don't think this is the way forward.

A few things:

1. If you want to suggest you are doing this as part of an attempt to avoid meritless litigation, you really should give concrete examples of that happening. Otherwise, it comes off as a smoke screen.

2. The assertion is that if widely adopted, it would avoid lots of meritless litigation. This is a theoretically possible outcome. Here's another theoretically possible outcome of wide adoption of this kind of very broad termination language: Facebook is able to use other people's technology at will because nobody can afford to not use their stuff, and no startup that they decide to take technology from, and say "no more facebook/react/etc for you," could realistically launch an effective lawsuit before they died. Assume for a second you think Facebook is not likely to do this. If widely adopted, someone will do it. Nobody should have to worry about this possibility when considering whether to adopt particular open source software.

(there are other theoretical outcomes, good and bad).

It's also worth pointing out: None of this is a new discussion or argument. All of the current revisions of the major licenses (Apache v2, GPLv3) went through arguments about whether to use these kinds of broader termination clauses (though not quite as one sided and company focused), and ultimately decided not to, for (IMHO good) reasons. I'm a bit surprised this isn't mentioned or discussed anywhere.

These kinds of clauses are not a uniform net positive; they are fairly bimodal.

jwingy 2 days ago 2 replies      
I wonder how Facebook would feel if all the open source software they currently use incorporated the same license. I bet it would deter them from enjoying much of the code they built their business on. This stance seems pretty antithetical to the goal and spirit of open source software and I really hope it's not the beginning of other companies following suit and 'poisoning' the well.
eridius 2 days ago 3 replies      
> We've been looking for ways around this and have reached out to ASF to see if we could try to work with them, but have come up empty.

There's a pretty obvious solution to this: relicense React. The fact that Facebook isn't even considering that is a pretty strong indication that they "weaponized" their license on purpose.

> To this point, though, we haven't done a good job of explaining the reasons behind our BSD + Patents license.

I think we already understand the reasoning behind it.

> As our business has become successful, we've become a larger target for meritless patent litigation.

And the solution you chose stops merit-ful litigation as well.

> We respect third party IP, including patents, and expect others to respect our IP too.

Clearly you don't, because you've intentionally designed a license to allow you carte blanche to violate other companies' patents if they're dependent enough upon React to not be able to easily stop using it.

Vue.js vs. React vuejs.org
695 points by fanf2  1 day ago   444 comments top 9
pier25 1 day ago 14 replies      
We moved away from React to Vue about 8 months ago and everyone on the team is a lot happier.

The first reason is that we hate JSX. It forces you to write loops, conditionals, etc. outside of the markup you are currently writing/reading. It's like writing shitty PHP code without templates. It also forces you to use a lot of boilerplate like bind(), Object.keys(), etc.
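To make the complaint concrete: JSX is only sugar for nested function calls, so loops and conditionals live in ordinary JavaScript expressions rather than template directives. A minimal sketch, using a hypothetical `h()` helper standing in for React.createElement (the element shape here is illustrative, not React's actual internals):

```javascript
// Hypothetical h() standing in for React.createElement; the returned
// object shape is illustrative only.
const h = (tag, props, ...children) => ({ tag, props, children: children.flat() });

const users = [{ name: 'Ada', active: true }, { name: 'Bob', active: false }];

// In JSX: <ul>{users.filter(u => u.active).map(u => <li>{u.name}</li>)}</ul>
// desugars to roughly:
const list = h('ul', null,
  users.filter(u => u.active).map(u => h('li', null, u.name)));
```

Whether that filter/map expression belongs "in the markup" or not is exactly the taste split between the JSX and template camps.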

Another problem with React is that it only really solves one problem. There is no official React router and we hated using the unofficial react-router for a number of reasons. A lot of people end up using MobX too.

With Vue there is no need to resort to third parties for your essential blocks. It provides an official router and store called Vuex, which IMO blows Redux out of the water when combined with Vue's reactive data.

Vue's docs are probably among the best I've used. They provide technical docs, plus excellent narrative docs (guides) for all their projects (Vue, Router, Vuex, templates, etc).

I won't say that Vue is perfect, but we would never go back to React.

If you don't like Vue but want to get out of React, check out Marko, the UI library by eBay. It's better in every way than Vue or React, except that the community and ecosystem are almost nonexistent.


a13n 1 day ago 19 replies      
You'll see quotes in this thread like "The demand for both React and Vue.js is growing tremendously" thrown around. It's good to check out npm install stats to get an unopinionated comparison.


In reality, React is downloaded roughly 4-5x more than Angular and 7-8x more than Vue. In August so far, React has 75% market share among these three libs. Interestingly, this share has grown in August compared to both last month (July) and the beginning of the year (January).
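These figures can be checked against npm's public downloads API. A sketch of the share arithmetic; the endpoint is real, but the download counts below are illustrative stand-ins, not actual August 2017 data:

```javascript
// statsUrl() builds npm's real downloads endpoint; the counts below are
// illustrative stand-ins chosen to mirror the comment's ratios.
const statsUrl = (pkg, period = 'last-month') =>
  `https://api.npmjs.org/downloads/point/${period}/${pkg}`;

const downloads = { react: 7500000, angular: 1700000, vue: 800000 }; // stand-ins

const total = Object.values(downloads).reduce((a, b) => a + b, 0);
const share = pkg => Math.round((100 * downloads[pkg]) / total);
```

With these stand-in counts, `share('react')` comes out at the 75% the comment cites.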

While this thread and the license thread might indicate that React is dying, it's not. It's growing.

If Vue is going to be what React is today, it has quite a long way to go.

Kiro 1 day ago 7 replies      
I've built semi-large applications in both Vue.js and React. I like both but prefer React.

For me Vue.js is like a lightweight Angular 1, in a good way. It's very intuitive and you can start working immediately. It does, however, easily end up in confusion about where the state lives with the two-way binding. I've run into a lot of implicit state changes wreaking havoc. The declarative nature of React definitely wins here, especially working with stateless functional components. If you're serious about Vue you should adhere to unidirectional bindings, components, and use Vuex.

The best thing about Vue.js for me is the single file components. It's such a nice feeling to know that everything affecting a certain component is right before your eyes. That's also the reason I started adopting CSS-in-JS in my React components.

The biggest problem for me with Vue.js is the template DSL. You often think "how do I do this complicated tree render in Vue's template syntax? In JSX I would just use JavaScript". For me, that was the best upgrade going from Angular to React and it feels like a step backwards when using Vue.js.
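What "in JSX I would just use JavaScript" buys you for a complicated tree render is plain recursion. A framework-free sketch; the node shape (`{ label, children }`) and function name are assumptions for illustration:

```javascript
// Framework-free sketch: an arbitrarily nested tree renders via an
// ordinary recursive function, no template directive required.
const renderTree = node =>
  `<li>${node.label}` +
  (node.children && node.children.length
    ? `<ul>${node.children.map(renderTree).join('')}</ul>`
    : '') +
  '</li>';

const tree = { label: 'root', children: [{ label: 'a', children: [] }, { label: 'b' }] };
const html = renderTree(tree);
```

In a template DSL the same recursion typically needs a separately registered component that references itself by name.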

blumomo 1 day ago 2 replies      
In this thread people are fighting about their _opinions_ why they use Vue.js or React. And why X is really better than Y.

In reality these programmers don't want to have the feeling they might have made the wrong choice when they picked X instead of Y. The idea that they might have made the poorer choice hurts so much that they need to defend their decision heavily, while in reality choosing React or Vue.js is like ordering pizza or pasta. You usually don't want both at the same time, so you need to explain why pizza is better than pasta tonight. Only that you usually have to stick around longer with Vue.js or React once chosen. Enjoy your choice and solve real problems, but stop fighting about it, programmers. Pasta and pizza will always both win.

spion 1 day ago 2 replies      
To me the whole idea of client-side HTML templates seems bad. They start out easy enough, but then they either limit you in power or introduce new and weird concepts to replace things that are easy, familiar and often better designed in the host language.

Here is an example on which I'd love to be proven wrong:


It's a generic spinner component that waits on a promise, then passes the fetched data off to any other custom JSX. It can also take onFulfill and onReject handlers to run code when the promise settles.

The concrete example shown in the fiddle renders a select list with the options received after waiting for a response from the "server". An onFulfill handler pre-selects the first option once data arrives. The observable selected item is also used from outside the spinner component.

With React+MobX and JSX it's all simple functions/closures (some of them returning JSX), lexical scope, and components. With Vue I'm not sure where to start - I assume I would need to register a custom component for the inner content and use slots?
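Since the fiddle link didn't survive, here is a framework-free sketch of the pattern being described: a wrapper that shows a placeholder until a promise settles, hands the data to a render callback, and fires optional hooks. All names (`awaitAndRender`, `onFulfill`, `onReject`) are hypothetical, not the commenter's actual code:

```javascript
// Sketch of the described spinner: a placeholder "renders" immediately,
// then the child markup replaces it once the promise settles.
async function awaitAndRender(promise, renderChild, { onFulfill, onReject } = {}) {
  let view = '<spinner/>'; // what the user would see while waiting
  try {
    const data = await promise;
    if (onFulfill) onFulfill(data); // e.g. pre-select the first option
    view = renderChild(data);
  } catch (err) {
    if (onReject) onReject(err);
    view = `<error>${err.message}</error>`;
  }
  return view;
}
```

The point of the comment stands either way: here the "slot" is just a closure passed as `renderChild`, with ordinary lexical scope.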

kennysmoothx 1 day ago 2 replies      
I used React for a few years and it was great and powerful, but there were many things I disliked. In particular, I was not a fan of JSX. I liked React but I did not feel comfortable using it.

When I first saw VueJS I had a hard time understanding how it would be any better than React, that is until I saw single file components.


I fell in love with the elegance of being able to separate my HTML, JS, and styles for a single component. It seemed /right/ to me.

In any case, I've been using VueJS ever since for my new projects moving forward and I'm very happy with it. It has everything I would ever need from React but in what I feel is a more polished and thought-out way.

Just my two cents :)

conradfr 1 day ago 2 replies      
In a way VueJS is "React for those who liked Angular1".

I've done many Angular apps. I've done a bit of React (with Reflux & Browserify).

I tried moving to React/Redux/Webpack but it's not an easy task to grasp the whole thing. Webpack itself came close to making me throw in the towel on side projects.

I tried VueJS because of a job interview, quite liked it, and got productive really fast thanks to the good documentation and my previous experience in Angular & React.

Professionally I wouldn't mind any of those but for side projects it will be VueJS from now on.

As a side note, I don't get why all the boilerplates always mix backend and frontend code and dependencies. If you're not interested in a Node backend and are still learning, it's overwhelming.

The worst thing is that the boilerplates you find are always outdated (router, hot-reloading, etc.) and, worst of all, mingle server and client deps, so if you're not interested in a Node backend you have to

keyle 1 day ago 5 replies      
I've used both. What makes me pick Vue in the end is the fact that there is no compiler needed, and no JSX and all the nonsense that goes with it.

If you want a full blown huge application to last years, then go Angular... Although who knows if Angular will be there in 5 or so years.

There is no perfect library/framework but I love Vue because Vue does exactly what it says on the tin.

ergo14 1 day ago 3 replies      
Another option that is interesting right now is Polymer 2.x; if you haven't tried it recently, give it a shot.

https://vuejs.org/v2/guide/comparison.html#Polymer

There are some similarities shared between Polymer and React/Vue (personally I've only used React and Angular 1.x before).

I've built applications with it using polyfills, and things worked just fine with legacy applications on IE10 + jQuery interacting with web components.

Performance is nice, and there is more and more adoption from giant enterprises like Netflix, IBM, GE, Gannett, Electronic Arts, Coca-Cola, ING, BBVA.

Webcomponents.org has over 1k components to choose from and is growing.

Now with `lit-html` arriving soon we might see an alternative to JSX if someone wants that; polymer-redux or polymer-uniflow is available as an option too.

https://hnpwa.com/ - one of the fastest Hacker News implementations is based on Polymer - and that is even without SSR.

SvelteJS also seems nice, although it seems to be a one-man project for now :( On the Polymer end, I hope they finally announce proper npm support at the summit next week; I miss that.

Firefox Focus A new private browser for iOS and Android blog.mozilla.org
657 points by happy-go-lucky  2 days ago   324 comments top 24
progval 2 days ago 9 replies      
According to F-Droid [1], it contains `com.google.android.gms:play-services-analytics`.

[1]: https://gitlab.com/fdroid/rfp/issues/171#note_30410376

lol768 2 days ago 2 replies      
Have been using this a while; it's really nice as the default browser to open links in. Having the floating button to clear everything is neat, and I like the UI design. It's also really fast.

I'd like to see better support for getting SSL/TLS info - why can't I tap on the padlock and get the certificate info (EV, OV, DV?), cipher suite, HSTS etc?

hprotagonist 2 days ago 3 replies      
I installed Firefox Focus for iOS simply for its content blocker. I still prefer using mobile safari, but augmented with three content blockers:

- Firefox Focus, which blocks all sorts of stuff

- 1Blocker, which blocks all sorts of stuff

- Unobstruct, which blocks Medium's "dickbar" popups.

rcthompson 2 days ago 2 replies      
This is useful to use as your default browser. It has a quick way to open the same link in another browser, so you can use it as a sort of quarantine to vet unknown links before exposing your main browser and all its juicy user data to a new website.
ghh 1 day ago 2 replies      
Focus does not seem to erase your history in the way you might expect. Try this on Android:

- Erase your history.

- Go to HN, click any link you haven't clicked before.

- Wait for it to load.

- Erase your history. Make sure you see the notification "Your browsing history has been erased".

- Go to HN again, and see the link you've just clicked still highlighted as 'visited'.

Xoros 2 days ago 2 replies      
How is this news? I installed it weeks ago on my iPhone. I don't understand why Mozilla just announced it now. Maybe it's a new version.

On the browser itself: I launched it, navigated to a URL, closed it, relaunched it, typed the first characters of my previous URL, and it autocompleted it. From my history, I guess.

So it's not like incognito mode on other browsers. (Haven't retested again)

bdz 2 days ago 8 replies      
I wish open source projects would publish the compiled .apk file, not just the source code.

If I want to install this on my Fire HD, I either have to download the .apk from some dodgy mirror site or install Google Play with some workaround on the Fire HD, because Firefox Focus is not available in the Amazon App Store. I mean, yeah, I can do both in the end, not a big deal, but I just want the .apk, nothing else.

computator 2 days ago 3 replies      
This would have been perfect for iPad 2's and 3's on which Safari and the normal Firefox keep crashing under the weight of the current bloated web.

But alas, the "simple and lightweight" Firefox Focus actually requires a heavyweight 64-bit processor:

> Why aren't older Apple products supported? Safari Content Blockers (which include Firefox Focus) are only available on devices with an A7 processor (64-bit) or later. Only 64-bit processors can handle the extra load of content blocking, which ensures optimal performance. For example, since the iPad 3 has an A5 processor, Firefox Focus is incompatible.[1]

Come on, iPad 2's and 3's are less than 5 years old. There has to be some way to keep the iPad 2 or 3 alive if all you want to do is browse the web.

[1] https://support.mozilla.org/en-US/kb/focus
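For context on what the quoted "content blocking" work is: Safari Content Blockers are declarative JSON rule lists that the app hands to WebKit, which compiles and applies them. A minimal illustrative rule set (the domain is a placeholder, and this is a sketch of the format, not Focus's actual block list):

```json
[
  {
    "trigger": { "url-filter": "ads\\.example\\.com", "load-type": ["third-party"] },
    "action": { "type": "block" }
  },
  {
    "trigger": { "url-filter": ".*" },
    "action": { "type": "block-cookies" }
  }
]
```

Because the rules are compiled ahead of time rather than evaluated per-request by the extension, the blocker itself never sees your browsing, which is the design Apple's processor restriction is attached to.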

cpeterso 2 days ago 1 reply      
Since I started using Firefox Focus for one-off searches, I'm surprised at how infrequently I really need to be logged into any websites to complete my task. Nice that Focus simply clears all those trackers and search history when I close it.
nkkollaw 2 days ago 5 replies      
So, if I understand this correctly... It's a regular browser, but like you're always in private mode + it's got a built-in ad blocker?

If I want to check Hacker News, let's say, 5 times throughout the day and feel like leaving a comment, I have to log in again, without autocomplete?

Maybe I'm missing something.

fiatjaf 2 days ago 1 reply      
> For example, if you need to jump on the internet to look up Muddy Waters real name

Best idea ever. That's the most common use case people have and one that's drastically underserved by current browsers.

If people can't get their browser to quickly open a link to simple stuff, it means the web is failing. If the web is failing they'll quickly jump over to sending images over WhatsApp or fall into the trap of using the Facebook app for all their needs that could be otherwise served by the web.

webdevatwork 2 days ago 1 reply      
Firefox Focus is great. It's amazing how much better web readability and performance gets when you block most of the adtech garbage.
ukyrgf 2 days ago 0 replies      
I love Focus. I wrote about it here[1], albeit poorly, but it just made me so happy to be able to use my phone again for web browsing. Sometimes I open Chrome and the tab that loads was something I was testing weeks prior... it's taken that big of a backseat to Firefox Focus.

[1]: https://epatr.com/blog/2017/firefox-focus/

x775 2 days ago 0 replies      
I have been using this for a while on one of my phones (OnePlus 5, newest version of OxygenOS) and am fairly satisfied with its overall performance. It works seamlessly for casual browsing - i.e. opening pages from Reddit or similar. I however cannot help but feel that the standard version with appropriate extensions (i.e. Disconnect, uBlock Origin and so forth) remains a better alternative than Focus at solving the very issues Focus seeks to address. I do very much love how closing the browser erases everything though. It is worth mentioning that the ability to install extensions is exclusive to Android for now, so Firefox Focus has become my go-to browser for my iOS devices. If you have Android, the above is worth considering though!
st0le 2 days ago 1 reply      
Hasn't it been available for a while now?
gnicholas 2 days ago 2 replies      
I love Focus and now use it for almost all of my mobile googling. One thing that would be nice is a share extension, so that when I'm in Safari and see a link I want to open I can share it to Firefox Focus. Right now I have to "share" it to [copy], open Focus, and paste it in. Not a huge hassle, but would be nice to streamline.
noncoml 2 days ago 2 replies      
Looks awesome and fast. Exactly what's needed and expected from Mozilla. Thank you!

Can we have something similar for desktop as well?

api_or_ipa 2 days ago 3 replies      
Why can Firefox build a browser in 16 MB and yet every other app on my phone is 80+ MB?
byproxy 2 days ago 1 reply      
There is also the Brave browser, which I believe covers the same ground : https://play.google.com/store/apps/details?id=com.brave.brow...
wnevets 2 days ago 1 reply      
I've been using it as my default browser on Android for a while and I like it. The only thing I don't love is the notification saying the browser is open; it triggers my "OCD". I understand why it's there, but I wish there was some way around it.
bllguo 2 days ago 0 replies      
I've been loving focus. Fastest mobile browser I've used. Appreciate the privacy features also.

I set it to my default browser and keep chrome handy on the side.

AdmiralAsshat 2 days ago 0 replies      
Hmm. Just visited a few of the pages I normally visit on my phone in Firefox for Android, and immediately got several pop-ups and banners that don't normally get through.

So I'd say its adblocking is still less effective than regular Firefox for Android + uBlock Origin add-on.

It does feel quite speedy, though. Could possibly be what I start using in the future to read HN articles.

makenova 1 day ago 0 replies      
My favorite feature is that it blocks ads in Safari. I'm surprised people and Mozilla aren't mentioning it more.
hammock 2 days ago 1 reply      
The headline in this submission fails to deliver the primary message of the actual post, which is that Firefox Focus is a lightweight mobile browser. That it blocks third-party tracking by default is secondary.
E-commerce will evolve next month as Amazon loses the 1-Click patent thirtybees.com
598 points by themaveness  2 days ago   221 comments top 39
jaymzcampbell 2 days ago 4 replies      
Setting aside the madness that is the patent itself ever being granted, what I found most interesting on that post was that this could now (possibly) become an actual web standard in the future:

> the World Wide Web Consortium (W3C) has started writing a draft proposal for one click buying methods.

The W3C site itself has a number of web-payment-related proposals in progress[1]. The Payment Request API, in particular, looks pretty interesting (updated 2017-08-17). I wonder what a difference something like that would've made back in the day when I was bathed in PayPal SOAP.

[1] https://www.w3.org/TR/#tr_Web_Payments
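For a flavor of the Payment Request API mentioned above: the page describes what it wants to charge, and the browser supplies the checkout UI. A minimal sketch; the amounts and labels are illustrative, and `PaymentRequest` only exists in browsers, so the guard below is for non-browser runtimes:

```javascript
// Sketch of the W3C Payment Request API (browser-only). Amounts, labels,
// and the requestPayment() wrapper are illustrative assumptions.
const methodData = [{ supportedMethods: 'basic-card' }];
const details = {
  total: { label: 'Order total', amount: { currency: 'USD', value: '64.99' } },
};

function requestPayment() {
  if (typeof PaymentRequest === 'undefined') return null; // not in this runtime
  // In a browser, show() opens the native payment sheet and resolves with
  // a PaymentResponse once the user confirms.
  return new PaymentRequest(methodData, details).show();
}
```

The appeal for the "one-click" discussion is that the browser, not each merchant, stores and autofills the payment details.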

tyrw 2 days ago 7 replies      
I ran an ecommerce company for about a year, and one click checkout was the least of our concerns when it came to Amazon.

The speed of delivery, prime benefits, brand recognition, and willingness to lose money on many if not most items are absolutely brutal to compete against.

I'm glad one click checkout will be more broadly available, but it's probably not going to make much of a difference...

NelsonMinar 2 days ago 1 reply      
The 1-Click patent was the genesis of a long debate between Jeff Bezos and Tim O'Reilly about software patents. It resulted in the formation of BountyQuest, a 2000-era effort to pay bounties for prior art for bad patents. Unfortunately it didn't really work out. But the history of arguing about software patents is pretty interesting. http://archive.oreilly.com/pub/a/oreilly//news/patent_archiv...
mseebach 2 days ago 7 replies      
The space (from my online shopping experience) seems to be divided between Amazon (one-click checkout, fast delivery, etc.) and everyone else (42-click checkout and one-week delivery, if you're lucky).

If the one-click patent was a major inhibitor of competition, I'd basically expect to see a lot of two-click checkout options. Instead I find myself creating a million redundant user accounts, telling people that my mother's maiden name is "khhsyebg" (she's got some Dothraki blood, it seems) and parsing "don't not uncheck the box if you wish to prevent us from causing the absence of non-delivery of our newsletter and also not abstaining from passing on your details to third parties".

dboreham 2 days ago 7 replies      
I have been buying from Amazon for 20 years and have not once used 1-Click.
pishpash 2 days ago 1 reply      
This patent prevented a nefarious checkout pattern across myriad potentially unscrupulous store fronts for more than a decade so was it really so bad? ;)

Some days I feel Amazon was not only the world's largest non-profit organization but also among its most beneficent!

drumttocs8 22 minutes ago 0 replies      
Huh? 1-Click patent? Does this mean I can literally patent a design choice?
masthead 2 days ago 7 replies      
Still can't believe that this was a patent!
TheBiv 2 days ago 4 replies      
NOTE that the Registered Trademark of "1-Click" will still be valid and owned by Amazon


romanhn 2 days ago 0 replies      
"E-commerce will change forever" ... strong words. Amazon has features that are a much bigger value proposition than one-click purchases. I don't see this changing the landscape in any significant way.
wheaties 2 days ago 2 replies      
"They have proposed ways of storing cards and address data in the browser..."

Oh hell no! Just what we need, yet another reason for people to attack your browser. Don't we already suggest to never use the "remember your password" button? Now, it's "remember your credit card." No. Please, just no.

dpflan 2 days ago 0 replies      
When the news about SoundCloud's future emerged, discussions turned to thoughts about how to help SC keep its roots and grow into what it can be, rather than be a Spotify competitor. The Amazon one-click patent was brought up in the context of how to allow buying the song / supporting the artist or record label you're enjoying.

Perhaps there is a chance now for SC (and others) to use this? (It'd be interesting to see how often the patent thwarted business decisions. Also, I wonder if this was considered in the funding round...)

Here is the comment: https://news.ycombinator.com/item?id=14991938

Here is the parent HN post: https://news.ycombinator.com/item?id=14990911

philfrasty 2 days ago 0 replies      
...e-commerce will change forever...

Simply from a legal standpoint this is BS. In some countries you have to show the customer a whole bunch of information and terms before they can make the purchase.

Just because Amazon ignores this due to their size and $$$ doesn't mean everyone can.

10000100001010 2 days ago 0 replies      
I have never used one-click, but I have relatives who compulsively purchase off Amazon with one-click all the time. It is almost a drug to them: they click a button and then stuff shows up at their door. For some users, removing all barriers except a click is sufficient to get them to buy.
jwildeboer 2 days ago 0 replies      
As a former core developer of osCommerce, whose users were threatened with patent infringement over exactly this, I will order a nice glass of whiskey to celebrate that this thing is finally over. This one patent made me join the fight against software patents in Europe, which we sort of won in 2005.
novaleaf 2 days ago 1 reply      
anecdote: I use Amazon for practically all of my shopping, only supplementing it by going to a brick-and-mortar for food.

I have never used the "buy now" feature, so honestly I think its impact is a bit overblown.

Here are my reasons I never use it:

1) I do a lot of comparison shopping, so I like to review my orders before the final purchase. (in case I put something in my cart and then later added something better)

2) I want to make sure I don't order something under $35 and get stuck paying for expedited shipping (which is free for prime members over $35 in purchases)

3) I have a few addresses and cards on file, and want to make sure the order will use the right one.

4) I use the cart as a temporary list, anything that looks interesting during my shopping session gets thrown in there (or perhaps another browser window if doing comparisons).

drcube 2 days ago 2 replies      
This is a "feature" I actively avoid. Why in the world would anyone want to buy something online without a chance to review their purchase? Other web pages don't even let you leave the page without asking "are you sure?".
clan 2 days ago 7 replies      
I have always hated the thought that retailers stored my credit card information. Seems to be very common with US based shops.

If this gets any traction I will need to fight even harder to opt out.

I yearn for the day I can have one off transaction codes.

stretchwithme 1 day ago 0 replies      
Buying things with 1 click is not an Amazon feature I've ever cared to use.

The right product at the right price, fast. That's what matters.

amelius 2 days ago 0 replies      
Reminds me of the joke I read somewhere about a "half-click patent", where the purchase is done on mousedown instead of on click.
ComodoHacker 1 day ago 0 replies      
I'm sure Amazon has already filed an application for "Zero-click checkout". Something like "swipe over a product image in a 'V' pattern to checkout", etc.
benmowa 2 days ago 0 replies      
"These are the ones [credit card processors] we have worked with in the past that we know use a card vault. Others likely support it too"

Note: the more common term is credit card tokenization, not just vaulting. Tokenization is not strictly required for 1-click if the merchant retains CC numbers itself, although that is not recommended due to PCI scope and breach liability.
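
For readers unfamiliar with the term, here is a toy sketch of what a vault/tokenization scheme buys you (the class and token format are made up, not any real processor's API): the merchant's database holds only an opaque token, and only the processor can map it back to a card number.

```python
import secrets

# Toy card vault (illustration only): the processor keeps the real PAN,
# the merchant keeps just the token, shrinking PCI scope and breach blast radius.
class CardVault:
    def __init__(self):
        self._store = {}

    def tokenize(self, pan):
        token = "tok_" + secrets.token_hex(8)  # opaque random token
        self._store[token] = pan
        return token

    def detokenize(self, token):
        # Only the processor side can do this, typically during authorization.
        return self._store[token]

vault = CardVault()
token = vault.tokenize("4111111111111111")
assert token.startswith("tok_")
assert vault.detokenize(token) == "4111111111111111"
```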

summer_steven 2 days ago 0 replies      
This is almost like a patent on cars that go above 60 MPH. Or a website that takes less than 50 ms to load.

They have a patent on the RESULT of technology. The patent SHOULD be on THEIR VERY SPECIFIC IMPLEMENTATION of 1-click checkout, but instead it is on all implementations that result in 1-click checkout.

Patents really are not meant for the internet...

blairanderson 1 day ago 0 replies      
Businesses use that shit. They don't have time and often don't care about the little details.

Businesses are the customers you want.

vnchr 2 days ago 0 replies      
Would anyone like something built to take advantage of this? I'm open next week between contracts (full-stack JS), maybe there is a browser extension or CMS plugin that would make this feature easy to implement?
wodenokoto 1 day ago 0 replies      
Does Amazon even use this themselves? I have fewer clicks going from product page to purchase confirmation on Aliexpress.com than on Amazon.com
samsonradu 2 days ago 0 replies      
Interesting to find out such a patent even exists. Does this mean the sites on which I have seen the one-click feature implemented were until now infringing the patent?
dajohnson89 2 days ago 0 replies      
The # of returns are surely higher for 1-click purchases -- wrong address, wrong CC#, no chance to double-check you have the right size/color, etc.
nocoder 2 days ago 2 replies      
Does this mean the use of the term "1-click" will no longer be exclusive to Amazon or is that a part of some trademark type stuff?
tomc1985 2 days ago 0 replies      
Oh joy, now everyone's going to have that stupid impulse buy button. Yay consumerism, please, take my firstborn...
sadlyNess 2 days ago 0 replies      
Hope it's going to be added to the payments ISO standards. Is that a fitting home, along with the W3C move?
perseusprime11 2 days ago 0 replies      
Amazon is eating the world. The loss of this patent will have zero net impact.
ThomPete 2 days ago 1 reply      
So quick product idea.

Make a Magento integration that allows ecommerce sites to implement it?

radicaldreamer 2 days ago 0 replies      
Anyone know if a company other than Apple currently licenses 1-Click?
likelynew 2 days ago 0 replies      
Has there been any court case for the validity of this patent?
yuhong 2 days ago 0 replies      
I remember the history on Slashdot about it.
minton 2 days ago 0 replies      
Please stop calling this technology.
kiflay 2 days ago 0 replies      
pdog 2 days ago 1 reply      
> No one knows what Apple paid to license the technology [from Amazon]...

This is factually incorrect. Of course, there are executives at Amazon and Apple who know how much was paid to license the one-click patent.

Peanut allergy cured in majority of children in immunotherapy trial theguardian.com
479 points by DanBC  3 days ago   169 comments top 26
hanklazard 3 days ago 2 replies      
Physician-scientist here. My graduate work was in an immunology lab. Just wanted to clear up some confusion I've seen in multiple posts.

While both peanut allergy and celiac disease involve pathogenic immune responses, they represent very different types of problems and this study's results do not suggest any relevance to celiac.

The peanut allergies that they are referring to in this study are one of the most striking examples of what's known as a Type I hypersensitivity (IgE-mediated/anaphylaxis). In this type of reaction, high levels of IgE, a class of antibody, generated toward a specific antigen become loaded onto mast cells and on re-exposure, cause mast cell degranulation and subsequent smooth muscle contraction. For this reason, anaphylactic responses frequently involve closing of the airway, nausea/vomiting, and other dysregulations of smooth muscle activation and require a strong adrenergic agonist like epinephrine to counteract this activation.

Celiac pathogenesis is not a Type I hypersensitivity. To my knowledge, the exact mechanism of pathogenesis is not known, but it is likely a combination of Type III (antibody-mediated) and Type IV (T-cell mediated) hypersensitivities.

Anyway, I'm not trying to ruin anyone's hope here, but this study has no relevance for celiac. What this has shown is that there is the potential for food allergies to be systematically eliminated with long-term increasing exposure to the problematic antigen, in this case, peanut antigen. This has been done for some time with other, less aggressive types of IgE-mediated conditions like dog and cat dander allergies. So in that way, it's not all that surprising of a result, but I'm certainly glad to see that this was able to be done safely. This is really great news for the millions of people out there with anaphylactic food allergies.

All that being said, I do hope that celiac can be managed more effectively with immune-modulatory (or other) treatments in the future and my sympathies go out to those who have been affected by this horrible disease.

S_A_P 3 days ago 7 replies      
My daughter has Celiac disease. It was diagnosed at age 4 when her growth chart showed she did not gain a single pound and grew " from age 3-4. We did a biopsy of her small intestine and it was completely smooth. (Should be almost like velvet) Her blood levels also showed high sensitivity to gluten. We have her on a strict gluten free diet and she has since followed the growth chart perfectly. However she is sensitive enough that she cannot eat gluten free food that has been prepared on the same grill/pan/cook or prep surface as food containing gluten. She suffers from nausea and diarrhea when cross contamination occurs. What this means is that I have to cook every meal she eats and bring it with us if we go to restaurants. We live in probably the best time ever for gluten free foods, but this is still a significant hardship for her. She is 7 now and I worry about as she gets older and wants to hang with friends/date/college. Unless things change she cannot just go grab food at a restaurant. Some restaurants have a gluten free protocol (PF changs comes to mind) but this is not common. From what I've read gut bacteria could be a contributor to gluten intolerance. I really hope studies like the peanut allergy encourage other dietary studies and immunotherapy becomes more common. Her having celiac disease is not the end of the world but her quality of life would change drastically if she didn't have to worry about that.
gehwartzen 3 days ago 3 replies      
The AAP also recently changed its guidelines for introducing peanuts to babies based on a study [1] showing a pretty dramatic decline in the development of the allergy with early exposure vs total avoidance.


rhexs 3 days ago 4 replies      
Does anyone know the history of why allergists assumed this just wouldn't work for decades? I'm assuming they initially tried this at the dawn of the allergist specialization but gave up due to bad practices / deaths?

I only ask because it seemed to have been general knowledge that this was impossible / couldn't be done up until recently. As an outsider looking in, it seems quite obvious, but that's just due to naivete.

jwineinger 3 days ago 3 replies      
I'm a parent of a 4-year old with a peanut allergy. We've been told that anywhere from 18-25% of kids with it "outgrow" the allergy by age 5. I've been looking into private practice oral immunotherapy (OIT) recently, which this protocol seems to be a variant of (adding the bacteria). My understanding is that you start with a low dose and then gradually increase over months until you're eating whole peanuts (4-12 of them) in the morning and evening as a maintenance dose. From what I've found, this can work for many types of food allergies and for all ages and all sensitivities.
herewegohawks 3 days ago 1 reply      
Very severe peanut allergy here - honestly go away with this crap of comparing your gluten allergy. I have to carry an epipen and worry about risking my life when I so much as eat food that was on the same table as baked goods that MIGHT have traces of peanut butter.
sageikosa 3 days ago 4 replies      
When in her teens, my daughter developed a peanut allergy during her time in drum corps; it was confirmed with skin patch tests and she had to carry an epi-pen. After about a year it just went away and she's back to "normal".
0xbear 3 days ago 1 reply      
True story: in Russia (and I can only assume other Eastern European countries) peanut allergy is so rare that I've never even heard of it until I emigrated. Pollen allergy is about the same, ragweed pollen allergy can be really bad too. But not peanut allergy.
6d6b73 3 days ago 0 replies      
I wonder if they had a control group taking only the bacteria, and another one taking only peanut proteins. If not, why did they decide on this combination?
matt_wulfeck 3 days ago 1 reply      
What's amazing to me is that they used to recommend you don't give children any peanuts until a specific age, but then they learned that early exposure actually dramatically decreases the chance of developing an allergy.

I feel like I have to throw away almost all advice they give us about kids these days. These types of things do a lot to undermine the advice of doctors.

nsxwolf 3 days ago 0 replies      
This seems so obvious, and I've been hearing about this approach for years and years. Yet it still feels like 20 years from now, this will still not be a treatment, and kids' classrooms will still be "nut free", and more and more kids will be carrying around epi-pens which will still cost a fortune.
manmal 3 days ago 3 replies      
Can we derive that Lactobacillus rhamnosus could reduce all kinds of allergies when taken, even without adding proteins that you are allergic to?
vanattab 3 days ago 4 replies      
Yumm... I can't wait for the shellfish version! I would love to try shrimp again and find out what all the fuss is about with lobster.
tmaly 3 days ago 0 replies      
My daughter is allergic to eggs, salmon, and fish in that family. Having vegan options in this modern day has been a real help.

I started my food side project https://bestfoodnearme.com with the idea in mind that I can catalog dishes at restaurants based on allergies, gluten free etc. Allergic reactions are a very scary thing especially with small children.

waterhouse 3 days ago 4 replies      
Could this be made to work on allergies in general? The article suggests it could at least be used for food allergies in general.
zeapo 3 days ago 0 replies      
A previous article (2015) talked about the same study http://www.abc.net.au/news/2015-01-28/probiotics-offer-hope-...
justinc-md 2 days ago 0 replies      
If you're in the bay area and considering OIT, a friend of mine is opening a private practice offering only OIT [0], starting next Wednesday in Redwood City. She is currently a full-time clinician at the Sean N. Parker Center for Allergy and Asthma Research at Stanford University.

Her clinic is relatively unique, in that it will be offering multi-allergen rapid desensitization. Using this procedure, a person can be desensitized to multiple allergens simultaneously, in as little as three months. She can treat milk, egg, wheat, soy, peanut, tree nut, fish, and shellfish allergies.

[0]: http://wmboit.com

cst 3 days ago 2 replies      
48 children were enrolled in the trial. Half of them were given the treatment and half the placebo, leaving 24 children in each group. Statistical significance testing is reported in the article and seems fairly robust, but this is too small a sample size to be fully confident in the results.
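
To put a rough number on that: with 24 children per arm, even a normal-approximation 95% confidence interval around an observed response rate is wide (the 0.8 below is a made-up illustration, not a figure from the paper):

```python
import math

# Half-width of a normal-approximation 95% CI for a proportion p observed in n subjects.
def ci_halfwidth(p, n):
    return 1.96 * math.sqrt(p * (1 - p) / n)

margin = ci_halfwidth(0.8, 24)
assert 0.15 < margin < 0.17   # roughly +/- 16 percentage points at n = 24
```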
melling 3 days ago 0 replies      
Will this work in adults too?
LordKano 2 days ago 0 replies      
I have a young cousin who had a pretty severe nut allergy. After receiving chemo for cancer treatment, she was cured of both the cancer and the nut allergy.
matt_heimer 3 days ago 0 replies      
Someone watched the Princess Bride - I spent the last few years building up an immunity to peanut powder.
alfon 3 days ago 0 replies      
jordache 3 days ago 0 replies      
Is nut allergy a rising issue for other parts of the world?
Tade0 3 days ago 2 replies      
Interesting how this bacteria is a common ingredient in yogurt.
grb423 3 days ago 1 reply      
When I was a kid I never heard of peanut allergies. What happened? Did children's guts change? Did peanuts?
Why PS4 downloads are so slow snellman.net
670 points by kryptiskt  1 day ago   179 comments top 22
ploxiln 1 day ago 4 replies      
Reminds me of how Windows Vista's "Multimedia Class Scheduler Service" would put a low cap on network throughput if any sound was playing:


Mark Russinovich justified it by explaining that the network interrupt routine was just too expensive to be able to guarantee no glitches in media playback, so it was limited to 10 packets per millisecond when any media was playing:


but obviously this is a pretty crappy one-size-fits-all prioritization scheme for something marketed as a most-sophisticated best-ever OS at the time:


Many people had perfectly consistent mp3 playback when copying files over the network 10 times as fast in other OSes (including Win XP!)

Often a company will have a "sophisticated best-ever algorithm" and then put in a hacky, lazy workaround for some problem, and obviously doesn't tell anyone about it. Sometimes the simpler, less sophisticated solution just works better in practice.
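
For scale, the cap works out like this (assuming the usual 1500-byte Ethernet frame payload): about 120 Mbit/s, which is why it went unnoticed on 100 Mbit links but visibly throttled gigabit ones.

```python
# The MMCSS cap described above: 10 packets per millisecond while audio plays.
packets_per_sec = 10 * 1000
bytes_per_sec = packets_per_sec * 1500   # assumes 1500-byte frames
mbit_per_sec = bytes_per_sec * 8 / 1e6

assert mbit_per_sec == 120.0
```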

andrewstuart 1 day ago 4 replies      
It's bizarre because I bought something from the PlayStation store on my PS4 and it took DAYS to download.

The strange part of the story is that it took so long to download that the next day I went and bought the game (Battlefield 4) from the shop and brought it back home and installed it and started playing it, all whilst the original purchase from the PlayStation store was still downloading.

I asked Sony if they would refund the game that I bought from the PlayStation store given that I had gone and bought it elsewhere from a physical store during the download and they said "no".

So I never want to buy from the PlayStation store again.

Why would Sony not care about this above just about everything else?

erikrothoff 1 day ago 2 replies      
Totally unrelated but: Dang, it must be awesome to have a service that people dissect at this level. This analysis is more in-depth and knowledgeable than anything I've ever seen while employed at large companies, where people are literally paid to spend time on the product.
g09980 1 day ago 4 replies      
Want to see something like this for (Apple's) App Store. Downloads are fast, but the App Store experience itself is so, so slow. Takes maybe five seconds to load search results or reviews even on a wi-fi connection.
cdevs 22 hours ago 1 reply      
As a developer, people seem surprised I don't have some massive gaming rig at home, but there's something about it that feels like work. I don't want to sit up and be fully alert - I did that all day at work. I want 30 mins to veg out on a console, jumping between Netflix and some quick multiplayer game with fewer hackers glitching out on the game. It seems impressive what the PS4 attempts to accomplish while you're playing a game: try to download a 40-gig game and somehow tiptoe in the background without screwing up the gaming experience. I couldn't imagine trying to deal with cranking up the speed here and there while keeping the game experience playable in an online game. Chrome is slow? Close your 50 tabs. Want faster PS4 downloads? Close your games/apps. Got it.
ckorhonen 1 day ago 3 replies      
Interesting - definitely a problem I've encountered, though I had assumed the issues fell more on the CDN side of things.

Anecdotally, when I switched DNS servers to Google vs. my ISP, PS4 download speeds improved significantly (20 minutes vs. 20 hours to download a typical game).

Reedx 1 day ago 3 replies      
PS3 was even worse in my experience - PS4 was a big improvement, although still a lot slower than Xbox.

However, with both PS4 and Xbox One it's amazingly slow to browse the stores and much of the dashboard. Anyone else experience that? It's so bad I feel like it must just be me... I avoid it as much as possible and it definitely decreases the number of games I buy.

mbrd 1 day ago 0 replies      
This Reddit thread also has an interesting analysis of slow PS4 downloads: https://www.reddit.com/r/PS4/comments/522ttn/ps4_downloads_a...
jcastro 1 day ago 0 replies      
Lancache says it caches PS4 and XBox, anyone using this? https://github.com/multiplay/lancache

(I use steamcache/generic myself, but should probably move to caching my 2 consoles as well).

foobarbazetc 1 day ago 2 replies      
The CDN thing is an issue too.

Using a local DNS resolver instead of Google DNS helped my PS4 speeds.

The other "trick" if a download is getting slow is to run the in built "network test". This seems to reset all the windows back even if other things are running.

deafcalculus 6 hours ago 0 replies      
Why doesn't PS4 use LEDBAT for background downloads? Wouldn't this address the latency problem without sacrificing download speeds? AFAIK, Macs do this at least for OS updates.
tgb 1 day ago 6 replies      
Sorry for the newbie question, but can someone explain why the round trip time is so important for transfer speeds? From the formula I'm guessing something like this happens: server sends DATA to client, client receives DATA then sends ACK to server, server receives ACK and then finally goes ahead and sends DATA2 to the client. But TCP numbers their packets and so I would expect them to continue sending new packets while waiting for ACKs of old packets, and my reading of Wikipedia agrees. So what causes the RTT dependence in the transfer rate?
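
A rough model that seems to answer this: TCP does keep sending while ACKs are outstanding, but only up to the advertised receive window. Once a full window is in flight unacknowledged, the sender stalls until ACKs return one RTT later, so sustained throughput is capped near window/RTT no matter how fat the pipe is. That is why the small receive windows described in the article hurt so badly on long paths. The arithmetic:

```python
# Upper bound on TCP throughput from the window-limit argument above.
def max_throughput_mbit(window_bytes, rtt_ms):
    return window_bytes / (rtt_ms / 1000) * 8 / 1e6

# A 64 KiB window (no window scaling) over a 100 ms path:
assert round(max_throughput_mbit(65536, 100), 2) == 5.24
# The same window over a 20 ms path is five times faster:
assert round(max_throughput_mbit(65536, 20), 2) == 26.21
```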
Tloewald 1 day ago 0 replies      
It's not just four years into launch; the PS3 was at least as bad.
lokedhs 16 hours ago 1 reply      
As one piece of information I offer my own experience with PSN downloads on the PS4.

I'm in Singapore and my normal download speed is around 250 Mb/s, sometimes getting closer to 300.

However, I sometimes download from the Swedish store as well, and those download speeds are always very slow. I don't think I've ever gone above one tenth of what I get with local downloads.

That said, bandwidth between Europe and Singapore is naturally more unpredictable, so I don't know if I can blame Sony here. My point is that PS4 downloads can be very fast, and the Singapore example is evidence of this fact.

sydney6 1 day ago 0 replies      
Is it possible that lacking TCP Timestamps in the Traffic from the CDN is causing the TCP Window Size Auto Scaling Mechanism to fail?

See this commit:


jumpkickhit 1 day ago 0 replies      
I normally warm boot mine, saw the speed increase with nothing running before, so guess I was on the right track.

I hope this is addressed by Sony in the future, or at least let us select if a download is a high priority or not.

tenryuu 1 day ago 1 reply      
I remember someone hacking at this issue a while ago. They blocked the Sony Japan server that the download was coming from. The PlayStation then fetched the file from a more local server, which was considerably faster.

Really strange

lossolo 1 day ago 2 replies      
DNS-based geo load balancing/CDNs are the wrong idea today. For example, if you use a DNS server that has a bad configuration, or one that is not supplied by your ISP, then you could be routed to servers thousands of km/miles from your location. Last time I checked, Akamai used that flawed DNS-based system. What you want now is what Cloudflare, for example, uses: anycast IP. You just announce the same IP prefix on multiple routers/locations and all traffic is routed to the nearest location thanks to how BGP routing works.
hgdsraj 1 day ago 1 reply      
What download speeds do you get? I usually average 8-10 MB/s
bitwize 1 day ago 1 reply      
This is so that there's plenty of bandwidth available for networked play.

The Switch firmware even states that it will halt downloads if a game attempts to connect to the network.

frik 1 day ago 3 replies      
PS4 and Switch at least have no peer-to-peer downloads.

Win10 and XboxOne have peer-to-peer download - who would want that, bad for users, wasting upload bandwidth and counts against your monthly internet consumption. https://www.reddit.com/r/xboxone/comments/3rhs4s/xbox_update...

galonk 1 day ago 0 replies      
I always assumed the answer was "because Sony is a hardware company that has never understood the first thing about software."

Turns out I was right.

Afraid of Makefiles? Don't be matthias-endler.de
493 points by tdurden  3 days ago   264 comments top 41
ejholmes 2 days ago 15 replies      
Make's underlying design is great (it builds a DAG of dependencies, which allows for parallel walking of the graph), but there's a number of practical problems that make it a royal pain to use as a generic build system:

1. Using make in a CI system doesn't really work, because of the way it handles conditional building based on mtime. Sometimes you just don't want the condition to be based on mtime, but rather a deterministic hash, or something else entirely.

2. Make is _really_ hard to use to try to compose a large build system from small re-usable steps. If you try to break it up into multiple Makefiles, you lose all of the benefits of a single connected graph. Read the article about why recursive make is harmful: http://aegis.sourceforge.net/auug97.pdf

3. Let's be honest, nobody really wants to learn Makefile syntax.

As a shameless plug, I built a tool similar to Make and redo, but just allows you to describe everything as a set of executables. It still builds a DAG of the dependencies, and allows you to compose massive build systems from smaller components: https://github.com/ejholmes/walk. You can use this to build anything your heart desires, as long as you can describe it as a graph of dependencies.
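
To make the DAG point concrete (a toy illustration, not how make or walk is actually implemented), Python's stdlib graphlib shows how a dependency graph yields batches of targets that can safely run in parallel:

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Toy build graph: app links two objects, each compiled from one source.
graph = {
    "app":    {"lib.o", "main.o"},
    "lib.o":  {"lib.c"},
    "main.o": {"main.c"},
    "lib.c":  set(),
    "main.c": set(),
}

ts = TopologicalSorter(graph)
ts.prepare()
batches = []
while ts.is_active():
    ready = sorted(ts.get_ready())  # everything here may run concurrently
    batches.append(ready)
    for node in ready:
        ts.done(node)

# Sources first, both objects in one parallel batch, then the final link.
assert batches == [["lib.c", "main.c"], ["lib.o", "main.o"], ["app"]]
```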

chungy 3 days ago 7 replies      
I think the primary thing that makes people fear Makefiles is that they try learning it by inspecting the output of automake/autoconf, cmake, or other such systems. These machine-generated Makefiles are almost always awful to look at, primarily because they have several dozen workarounds and least-common-denominators for make implementations dating back to the 1980s.

A properly hand-tailored Makefile is a thing of beauty, and it is not difficult.

bluejekyll 3 days ago 7 replies      
Make is awesome. I have always loved make, and got really good with some of its magic. After switching to Java years ago, we collectively decided, "platform independent tools are better", and then we used ant. Man was ant bad, but hey! It was platform independent.

Then we started using maven, and man, maven is ridiculously complex, especially adding custom tasks, but at least it was declarative. After getting into Rust, I have to say, Cargo got the declarative build just right.

But then, for some basic scripts I decided to pick Make back up. And I wondered, why did we move away from this? It's so simple and straightforward. My suggestion, like others are saying, is keep it simple. Try and make declarative files, without needing to customize to projects.

I do wish Make had a platform independent strict mode, because this is still an issue if you want to support different Unixes and Windows.

p.s. I just thought of an interesting project. Something like oh-my-zsh for common configs.

raimue 3 days ago 1 reply      
By using pseudo targets only in the example and not real files, the article misses the main point of targets and dependencies: target rules will only be executed if the dependencies changed. make will compare the time of last modification (mtime) on the filesystem to avoid unnecessary compilation. To me, this is the most important advantage of a proper Makefile over a simple shell script always executing lots of commands.
rdtsc 2 days ago 4 replies      
Sneaky pro-tip - use Makefiles to parallelize jobs that have nothing to do with building software. Then throw a -j16 or something at it and watch the magic happen.

I was stuck on an old DoD redhat box and it didn't have gnu parallel or other such things and co-worker suggested make. It was available and it did the job nicely.
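
For example, a hypothetical Makefile that gzips every .log file in a directory; since each .gz is an independent target, `make -j16` fans the jobs out across cores (file names are made up, and recipe lines need tabs):

```make
# Hypothetical parallel-jobs Makefile: nothing here compiles software.
# Each compressed log is its own target, so `make -j16` runs many at once.
LOGS  := $(wildcard *.log)
GZIPS := $(LOGS:.log=.log.gz)

all: $(GZIPS)

%.log.gz: %.log
	gzip -c $< > $@

.PHONY: all
```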

syncsynchalt 3 days ago 4 replies      
Today's simple makefiles are the end result of lessons hard learned. You'd be horrified to see what the output of imake looked like.

From memory here's a Makefile that serves most of my needs (use tabs):

 SOURCE=$(wildcard *.c)
 OBJS=$(patsubst %.c,%.o, $(SOURCE))
 CFLAGS=-Wall
 # define CFLAGS and LDFLAGS as necessary

 all: name_of_bin

 name_of_bin: $(OBJS)
 	$(CC) $(CFLAGS) -o $@ $^ $(LDFLAGS)

 %.o: %.c
 	$(CC) $(CFLAGS) -c -o $@ $<

 clean:
 	rm -f *.o name_of_bin

 .PHONY: clean all

martin_ky 2 days ago 0 replies      
Due to its versatility, Makefiles can be creatively used beyond building software projects. Case in point: I used a very simple hand-crafted Makefile [1] to drive massive Ansible deployment jobs (thousands of independently deployed hosts) and work around several Ansible design deficiencies (inability to run whole playbooks in parallel - not just individual tasks, hangs when deploying to hosts over unstable connection, etc.)

The principle was to create a make target and rule for every host. The rule runs ansible-playbook for this single host only. Running the playbook for e.g. 4 hosts in parallel was as simple as running 'make -j4'. At the end of the make rule, an empty file with the name of the host was created in the current directory - this file was the target of the rule - it prevented running Ansible for the same host again - kind of like Ansible retry file, only better.

I realize that Ansible probably is not the best tool for this kind of job, but this Makefile approach worked very well and was hacked together very quickly.

[1] https://gist.github.com/martinky/819ca4a9678dad554807b68705b...
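
The generated Makefile followed roughly this shape (host names, inventory, and playbook paths are made-up placeholders, not the real job):

```make
# One target per host; the stamp file created by `touch` stops re-runs.
HOSTS := web1 web2 db1

all: $(HOSTS)

$(HOSTS):
	ansible-playbook -i inventory site.yml --limit $@
	touch $@

.PHONY: all
```

`make -j4` then deploys four hosts at a time, and any host whose stamp file already exists is skipped on the next run.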

AceJohnny2 2 days ago 3 replies      
"Build systems are the bastard stepchild of every software project" -- me a years ago

I've worked in embedded software for over a decade, and all projects have used Make.

I have a love-hate relationship with Make. It's powerful and effective at what it does, but its syntax is bad and it lacks good datastructures and some basic functions that are useful when your project reaches several hundred files and multiple outputs. In other words, it does not scale well.

Worth noting that JGC's Gnu Make Standard Library (GMSL) [1] appears to be a solution for some of that, though I haven't applied it to our current project yet.

Everyone ends up adding their own half-broken hacks to work around some of Make's limitations. Most commonly, extracting header file dependency from C files and integrating that into Make's dependency tree.

I've looked at alternative build systems. For blank-slate candidates, tup [2] seemed like the most interesting for doing native dependency extraction and leveraging Lua for its datastructures and functions (though I initially rejected it due to the silliness of its front page.) djb's redo [3] (implemented by apenwarr [4]) looked like another interesting concept, until you realize that punting on Make's macro syntax to the shell means the tool is only doing half the job: having a good language to specify your targets and dependencies is actually most of the problem.

Oh, and while I'm around I'll reiterate my biggest gripe with Make: it has two mechanisms to keep "intermediate" files, .INTERMEDIATE and .PRECIOUS. The first does not take wildcard arguments, the second does but it also keeps any half-generated broken artifact if the build is interrupted, which is a great way to break your build. Please can someone better than me add wildcard support to .INTERMEDIATE.

[1] http://gmsl.sourceforge.net

[2] http://gittup.org/tup/ (Also, its creator, Mike Shal, now works at Mozilla on their build system.)

[3] http://cr.yp.to/redo.html

[4] https://github.com/apenwarr/redo

rrmm 3 days ago 3 replies      
Makefiles are easy for small to medium sized projects with few configurations. After that it seems like people throw up their hands and use autotools to deal with all the recursive make file business.

Most attempts to improve build tools completely replace make rather than adding features. I like the basic simplicity and the syntax, (the tab thing is a bit annoying but easy enough to adapt to).

It'd be interesting to hear everyone's go to build tools.

qznc 2 days ago 1 reply      
I love Make for my small projects. It still could be better. Here is my list:

* Colorize errors

* Hide output unless the command fails

* Automatic help command which shows (non-file) targets

* Automatic clean command which deletes all intermediate files

* Hash-based update detection instead of mtime

* Changes in "Makefile" trigger rebuilds

* Parallel builds by default

* Handling multi-file outputs

* Continuous mode which watches the file system for changes and rebuilds automatically

I know of no build system which provides these features and is still simple and generic. Tup is close, but it fails with LaTeX, because of the circular dependencies (generates and reads aux file).
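
As a sketch of how the hash-based wishlist item could look outside make (the state-file name is made up):

```python
import hashlib, json, os

STATE = ".build-hashes.json"  # made-up state file holding last-seen hashes

def file_hash(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def changed_files(paths):
    old = {}
    if os.path.exists(STATE):
        with open(STATE) as f:
            old = json.load(f)
    new = {p: file_hash(p) for p in paths}
    with open(STATE, "w") as f:
        json.dump(new, f)
    return [p for p in paths if old.get(p) != new[p]]

with open("a.txt", "w") as f:
    f.write("hello")
assert changed_files(["a.txt"]) == ["a.txt"]  # first run: everything is new
os.utime("a.txt")                             # bump mtime, content unchanged
assert changed_files(["a.txt"]) == []         # hash-based: nothing to rebuild
```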

wyldfire 3 days ago 2 replies      
> You've learned 90% of what you need to know about make.

That's probably in the ballpark, anyways.

The good (and horrible) stuff:

- implicit rules

- target specific variables

- functions

- includes

I find that with implicit rules and includes I can make really sane, 20-25 line makefiles that are not a nightmare to comprehend.

For a serious project of any scope, it's rare to use bare makefiles, though. recursive make, autotools/m4, cmake, etc all rear their beautiful/ugly heads soon enough.

But make is my go-to for a simple example/reproducible/portable test case.

mauvehaus 3 days ago 1 reply      
I feel like any discussion of make is incomplete without a link to Recursive Make Considered Harmful[0]. Whether you agree with the premise or not, it does a nice job of introducing some advanced constructs that make supports and provides a non-contrived context in which you might use them.

[0] http://aegis.sourceforge.net/auug97.pdf

Animats 3 days ago 2 replies      
The trouble with "make" is that it's supposed to be driven by dependencies, but in practice it's used as a scripting language. If the dependency stuff worked, you would never need

 make clean; make
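
For C projects the missing dependencies are usually headers; compiler-generated dependency files close most of that gap. A sketch of the common GNU make plus GCC/Clang pattern (file names hypothetical):

```makefile
CFLAGS += -MMD -MP              # emit a .d dependency file per object
SRCS := $(wildcard *.c)
OBJS := $(SRCS:.c=.o)

app: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $^

# Pull in the generated header dependencies; ignore missing ones.
-include $(OBJS:.o=.d)
```

With this in place, touching a header rebuilds exactly the objects that include it, and `make clean; make` stops being necessary for correctness.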


misnome 2 days ago 0 replies      
Almost every build system (where I think it isn't controversial to say make is most often used) looks nice and simple with short, single-output examples to demonstrate the basis of a system.

It's when you start having hundreds of sources, targets, external dependencies, flags and special cases that it becomes hard to write sane, understandable Makefiles, which it presumably why people tend to use other systems to generate makefiles.

So sure, understanding what make is, and how it works is probably important, since it'll be around forever. But there are usually N better ways of expressing a build system, nowadays.

nstart 2 days ago 4 replies      
So I saw this and thought, why not give it a try. How hard could it be, right? My goal? Take my bash file that does just this (I started Go just yesterday, so I might be doing cross compiling wrong :D):


export GOPATH=$(pwd)

export PATH=$PATH:$GOPATH/bin

go install target/to/build

export GOOS=darwin

export GOARCH=amd64

go install target/to/build


which should be simple. Right? Set environment variables, run a command. Set another environment variable, run a command.

45 minutes in and I haven't been able to quite figure it out just yet. For comparison, I definitely wrote my build.sh file in less than 15 minutes when I started out.
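
For what it's worth, one way to express that script in make is target-specific exported variables; a sketch assuming GNU make and the package path from the script above:

```makefile
PKG := target/to/build

# Native build: derive GOPATH from the working directory, as the script does.
build: export GOPATH := $(CURDIR)
build:
	go install $(PKG)

# Cross-compile for macOS by exporting GOOS/GOARCH for this target only.
darwin: export GOOS := darwin
darwin: export GOARCH := amd64
darwin:
	go install $(PKG)
```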

pkkim 3 days ago 2 replies      
One important tip is that the commands under a target each run sequentially, but in separate shells. So if you want to set env vars, cd, activate a Python virtualenv, etc. to affect the next command, you need to make them a single command, like:

 target:
 	cd ./dir; ./script.sh
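
A sketch of the gotcha and two common fixes (paths hypothetical):

```makefile
broken:
	cd ./dir          # runs in its own shell...
	./script.sh       # ...so this still executes from the original directory

works:
	cd ./dir && ./script.sh

# GNU make 3.82+ alternative: .ONESHELL makes every recipe in the file
# run in a single shell, so state carries across lines.
.ONESHELL:
also-works:
	cd ./dir
	./script.sh
```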

epx 2 days ago 1 reply      
Those who don't understand Make are condemned to reimplement it, poorly.
bauerd 3 days ago 1 reply      
I remember trying to wrap my head around the monstrosity that is Webpack. Gave up and used make, never looked back since
flukus 3 days ago 2 replies      
Personal blog spam: I learned make recently too and discovered it was good for high-level languages as well. Here is an example of building a C# project: http://flukus.github.io/rediscovering-make.html

Now the blog itself is built with make: http://flukus.github.io/building-a-blog-engine.html

DangerousPie 2 days ago 2 replies      
If you want all the greatness of Makefiles without the painful syntax I can highly recommend Snakemake: https://snakemake.readthedocs.io/en/stable/

It has completely replaced Makefiles for me. It can be used to run shell commands just like make, but the fact that it is written in Python allows you to also run arbitrary Python code straight from the Makefile (Snakefile). So now instead of writing a command-line interface for each of my Python scripts, I can simply import the script in the Snakefile and call a function directly.


 rule make_plot:
     input:
         data = "{name}.txt"
     output:
         plot = "{name}.png"
     run:
         import my_package
         my_package.plot(input['data'], output['plot'], name = wildcards['name'])
Another great feature is its integration with cluster engines like SGE/LSF, which means it can automatically submit jobs to the cluster instead of running them locally.

rcarmo 2 days ago 1 reply      
These days, most of my projects have a Makefile with four or five simple commands that _just work_ regardless of the language, runtime or operating system in use:

- make deps to setup/update dependencies

- make serve to start a local server

- make test to run automated tests

- make deploy to package/push to production

- make clean to remove previously built containers/binaries/whatever

There are usually a bunch of other more specific commands or targets (like dynamically defined targets to, say, scale-frontends-5 and other trickery), but this way I can switch to any project and get it running without bothering to look up the npm/lein/Python incantation du jour.

Having sane, overridable (?=) defaults for environment variables is also great, and makes it very easy to do stuff like FOOBAR=/opt/scratch make serve for one-offs.
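
A minimal sketch of that pattern (variable names and commands hypothetical):

```makefile
# ?= assigns only if the variable isn't already set (e.g. in the
# environment), so `FOOBAR=/opt/scratch make serve` overrides it.
FOOBAR ?= /var/lib/app
PORT   ?= 8000

serve:
	FOOBAR=$(FOOBAR) ./server --port $(PORT)
```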

Dependency management is a much deeper and broader topic, but the usefulness of Makefiles to act as a living document of how to actually run your stuff (including documenting environment settings and build steps) shouldn't be ignored.

(Edit: mention defaults)

rcthompson 3 days ago 0 replies      
For people who are more comfortable in Python, I highly recommend Snakemake[1]. I use it for both big stuff like automating data analysis workflows and small stuff like building my Resume PDF from LyX source.

[1]: https://snakemake.readthedocs.io/en/stable/

Joky 2 days ago 0 replies      
Make is fine for simple cases, but I'm working on a project that is based on buildroot right now, and it is kind of a nightmare: make just does not provide any good way at this scale to keep track of what's going on and inspect / understand what goes wrong, especially in the context of a highly parallel build where some dependencies go missing.

In general, all the implicitness it has makes it hard to predict what can happen, again when you scale to a project that is 1) large and 2) doesn't have a regular structure.

On another, smaller scale: doing an incremental build of LLVM is a lot faster with Ninja compared to Make (CMake-generated).

Make is great: just don't use it where it is not the best fit.

gtramont 2 days ago 0 replies      
Here are some tips I like to follow whenever writing Makefiles (I find them joyful to write): http://clarkgrubb.com/makefile-style-guide
rileytg 3 days ago 1 reply      
Wow, I've been feeling like not knowing make is a major weakness of mine. This article has finally tied all my learning together; I feel totally capable of using make now. Thank you.
vacri 2 days ago 1 reply      
One very important thing missing from this primer is that Make targets are not 'mini-scripts', even though they look like it. Every line is 'its own script' in its own subshell - state is not passed between lines.

Make is scary because it's arcane and contains a lot of gotcha rules. I avoided learning Make for a long time. I'm glad I did learn it in the end, though I wouldn't call myself properly fluent in it yet. But there are a ton of gotchas and historical artifacts in Make.

mauvehaus 3 days ago 3 replies      
Has anybody successfully used make to build java code? I realize there are any number of other options (ant, maven, and gradle arguably being the most popular).

In fact, I realize that the whole idea of using make is probably outright foolish owing to the intertwined nature of the classpath (which expresses runtime dependencies) and compile-time dependencies (which may not be available in compiled form on the classpath) in Java. I'm merely curious if it can be done.

zwischenzug 2 days ago 1 reply      
This is great, and needs saying.

Recently I wrote a similar blog about an alternative app pattern that uses makefiles:


fiatjaf 2 days ago 0 replies      
Makefiles are simple, but 99% of the existing Makefiles are computer-generated incomprehensible blobs. I don't want that.
user5994461 2 days ago 1 reply      
>>> Congratulations! You've learned 90% of what you need to know about

The next 90% will be learning that Make breaks when tabs and spaces are mixed in the same file, and your developers all use slightly different editors that will mix them up all the time.
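
One mitigation, if you can require GNU make 3.82 or later, is to swap the tab for a visible recipe prefix; a sketch:

```makefile
# Recipes now start with ">" instead of a tab, so editors can't
# silently turn the prefix into spaces.
.RECIPEPREFIX := >

hello:
> echo "no tabs required"
```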

leastangle 2 days ago 3 replies      
I did not know people are afraid of Makefiles. Maybe a naïve question, but what is so scary about make?
systemz 2 days ago 0 replies      
Instead of a makefile, I can recommend Taskfile: https://hackernoon.com/introducing-the-taskfile-5ddfe7ed83bd

Simple to use without any magic.

mschuster91 2 days ago 0 replies      
Please, don't ship your own Makefiles. Yes, autotools sucks - but there is one thing that sucks more: no "make uninstall" target.

Good people do not ship software without a way to get rid of it, if needed.

quantos 2 days ago 0 replies      
I wrote a Non-Recursive Makefile Boilerplate (nrmb) for C, which should work in large projects with a recursive directory structure. There is no need to manually add source file names to the makefile; it automatically does this. One makefile compiles it all. Of course, it isn't perfect, but it does the job and you can modify it for your project. Here is the link


Have a look :)

knowsuchagency 2 days ago 0 replies      
Make is fine, but I think we have better tools nowadays to do the same things.

Even though it may not have been originally intended as such, I've found Fabric http://docs.fabfile.org/en/1.13/tutorial.html to be far, far more powerful and intuitive as a means of creating CLIs (that you can easily parametrize and test) around common tasks such as building software.

athenot 2 days ago 0 replies      
After using the various javascript build processes, I went back to good old makefiles and the result is way simpler. I have a target to build the final project with optimizations and a target to build a live-reload version of the project, that watches for changes on disk and rebuilds the parts as needed (thanks to watchify).

This works in my cases because I have browserify doing all the heavy lifting with respect to dependency management.

elnygren 2 days ago 0 replies      
I almost always roll a basic Makefile for even simple web projects. PHONY commands like "make run" and "make test" in every project make context switching a bit easier.

While things like "npm start" are nice, not all projects are Node.js. In my current startup we're going to have standardised Makefiles in each project so it's easy to build, test, run, and install any microservice locally :)
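
A sketch of such a standardised Makefile (the commands are hypothetical placeholders; .PHONY marks targets that aren't files):

```makefile
.PHONY: build test run install

build:
	docker build -t myservice .

test:
	./run-tests.sh

run:
	docker-compose up

install:
	docker push myservice
```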

ojosilva 2 days ago 0 replies      
Opinion poll. I'm writing a little automation language in YAML and I was wondering if people prefer a dependency graph concept where tasks run parallel by default, unless stated as dependency, or a sequential set of instructions where tasks only run in parallel if explicitly "forked".

I'd say people would lean towards the former, but time and real-world experience have shown that sequential dominates everything else.

bitwize 3 days ago 1 reply      
Or just use cmake and save yourself time, effort, and pain.
erAck 2 days ago 0 replies      
Take a look at the LibreOffice gbuild system, completely written in GNU make "language". And then come back saying you're not afraid of make ;-)

Still, it probably would be much harder, if possible at all (doubted for most), to achieve the same with any other tool mentioned here.

brian-armstrong 2 days ago 3 replies      
Using CMake is so much nicer than make, and it's deeply cross-platform. CMake makes cross-compiling really easy, while with make you have to be careful and preserve flags correctly. Much nicer to just include a CMake module that sets up everything for you. Plus it can generate Xcode and Visual Studio configs for you. Doing make by hand just seems unnecessary.
Why the Brain Needs More Downtime (2013) scientificamerican.com
427 points by tim_sw  2 days ago   104 comments top 10
laydn 2 days ago 12 replies      
I've been noticing that I'm more tired and need more downtime on days when I make (or am forced to make) critical decisions.

If I start the day by knowing what to do, then I don't really feel the burnout. For example, if I'm designing either a piece of hardware or firmware, and I know how to tackle the problem and it is just the matter of implementing it, I can code/design for 10 hours straight and when the workday ends, I still feel full of energy.

However, if the day is full of "decisions" (engineering or managerial), at the end of the day, I feel exhausted (and irritable, according to my family)

jmcgough 2 days ago 4 replies      
I find that I struggle with offices... you're stuck there for 8+ hours (even if you don't work that way, you need to create an impression), but after several hours of intense focus and the noise and chaos of an open office, I can feel drained and anxious. Some days I'll walk to a nearby park with wifi after work, meditate for a short bit, and then code from there. My focus and creativity come right back after a bit of downtime in a relaxing space.
hasenj 2 days ago 8 replies      
I've always had a hard time sleeping/waking on time. What you might call a "night owl".

I'm starting to notice that on weekdays I actually perform better with 6 hours of sleep rather than 8 or 9. Then on the weekend I would "sleep in" to make up for the lost sleep time.

For some reason, if I sleep for 8 or 9 hours, I wake up feeling like I don't want to do anything. I don't feel sluggish or anything. I just feel "satisfied". Like there's nothing to be done. I can just "be". I can't bring myself to focus on any specific task. Nothing feels urgent.

When I sleep 6 hours, somehow I can focus more.

This is combined with not consuming caffeine. If I drink coffee after I have slept only for 6 hours, it makes me tired and sluggish.

ihateneckbeards 2 days ago 1 reply      
I noticed I can be intensely focused for about 4 to 6 hours max; after that I'll be "washed out" and I become error-prone on complicated tasks.

Unfortunately the 9-hour in-office format constrains me to stay in my seat, so I'll try to work on easier things at that time while being quite unproductive.

How do we bring this fact to companies? It seems only the most "progressive" companies like Facebook or Google really understand this.

dodorex 2 days ago 2 replies      
"Some researchers have proposed that people are also physiologically inclined to snooze during a 2 P.M. to 4 P.M. 'nap zone' (or what some might call the afternoon slump) because the brain prefers to toggle between sleep and wake more than once a day."

Anecdotally, Thomas Edison was said to sleep only 3-4 hours a night and take frequent (very frequent) naps throughout the day.


danreed07 2 days ago 1 reply      
I'm ambivalent about this. I have a friend who's a Harvard math major; I've seen him work. He sleeps late and wakes early; when we work together, he always messes up my schedule by calling me in the middle of the night. I'm all tired and groggy the next day, and he's totally fine.

I think some people just inherently have more energy than others.

uptownfunk 2 days ago 3 replies      
I think I get a good six hours of actual work in the office. And then I need to check out and take a shower. Something about that after work shower just brings my focus and clarity right back. But if I have to crank with my team for a 12-15 hour day, after max 8 hours, we're all just physically there, but mentally have checked out long before that.

On sleep, 5-6 hours is optimal for me. Too much can be bad, I feel groggy and have brain-fog the rest of the day. I can get by on fewer for one day, but more than that and it becomes painful. I think a lot of this also has to do with lifestyle. How often and when do you eat, have sex, get sunlight, drink water, go out doors, etc. Many levels can be played with here.

Would be interested in hearing any hacks for getting by on less sleep.

pedrodelfino 9 hours ago 0 replies      
Great article. I remember seeing these ideas on Cal Newport's book, "Deep Work". I need more discipline to execute my "downtime plan".
qaq 2 days ago 0 replies      
Best option I experienced was working remotely from PST on an EST schedule. Start at 6am, done at 3; eat, have a drink, take a 1-hour nap, and you have 8 hours left that, after the nap, feel like a whole new day.
nisa 2 days ago 0 replies      
I'm having a hard time organising, and especially switching, tasks and getting meaningful work done when multiple unrelated things fall together. Having a single thing to do and being able to just leave work would be great, but at the moment I'm freelancing, holding multiple jobs, doing sysadmin-style work, learning theory, and programming in a new language, and it really just kills me; I'm not getting much done. Once I get traction on a certain task it's okay, but the constant switching is killing me.
What next? graydon2.dreamwidth.org
435 points by yomritoyj  1 day ago   141 comments top 26
fulafel 1 day ago 8 replies      
Again my pet ignored language/compiler technology issue goes unmentioned: data layout optimizations.

Control flow and computation optimizations have enabled use of higher level abstractions with little or no performance penalty, but at the same time it's almost unheard of to automatically perform (or even facilitate) the data structure transformations that are daily bread and butter for programmers doing performance work. Things like AoS->SoA conversion, compressed object references, shrinking fields based on range analysis, flattening/denormalizing data that is used together, converting cold struct members to indirect lookups, compiling different versions of the code for different call sites based on input data, etc.

It's baffling considering that everyone agrees memory access and cache footprint are the current primary perf bottlenecks, to the point that experts recommend treating on-die computation as free and counting only memory accesses in first-order performance approximations.
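
To make the first item on that list concrete, here is the kind of AoS-to-SoA transformation the comment wishes compilers would perform automatically, written out by hand (illustrative C, not from the post):

```c
#include <stddef.h>

/* Array-of-Structs: each particle's fields are interleaved, so a loop
 * that reads only `x` still drags `y` and `mass` through the cache. */
struct ParticleAoS { float x, y, mass; };

/* Struct-of-Arrays: each field is contiguous, so the same loop streams
 * through exactly the bytes it needs. */
struct ParticlesSoA { float *x, *y, *mass; };

float sum_x_aos(const struct ParticleAoS *p, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++) s += p[i].x;
    return s;
}

float sum_x_soa(const struct ParticlesSoA *p, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++) s += p->x[i];
    return s;
}
```

Both functions compute the same result; the layout change, not the code, is what alters the cache footprint, which is why doing this manually is so invasive.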

z1mm32m4n 1 day ago 3 replies      
Graydon's very first answer to "what's next" is "ML modules," a language feature probably few people have experienced first hand. We're talking about ML-style modules here, which are quite precisely defined alongside a language (as opposed to a "module" as more commonly exists in a language, which is just a heap of somewhat related identifiers). ML modules can be found in the mainstream ML family languages (Standard ML, OCaml) as well as some lesser known languages (1ML, Manticore, RAML, and many more).

It's really hard to do justice explaining how amazing modules are. They capture the essence of abstraction incredibly well, giving you plenty of expressive power (alongside an equally powerful type system). Importantly, they compose; you can write functions from modules to modules!

(This is even more impressive than you think: modules have runtime (dynamic) AND compile time (static) components. You've certainly written functions on runtime values before, and you may have even written functions on static types before. But have you written one function that operates on both a static and a dynamic thing at the same time? And what kind of power does this give you? Basically, creating abstractions is effortless.)

To learn more, I recommend you read Danny Gratzer's "A Crash Course on ML Modules"[1]. It's a good jumping off point. From there, try your hand at learning SML or OCaml and tinker. ML modules are great!

[1]: https://jozefg.bitbucket.io/posts/2015-01-08-modules.html

Animats 1 day ago 3 replies      
One big problem we're now backing into is having incompatible paradigms in the same language. Pure callback, like Javascript, is fine. Pure threading with locks is fine. But having async/await and blocking locks in the same program gets painful fast and leads to deadlocks. Especially if both systems don't understand each other's locking. (Go tries to get this right, with unified locking; Python doesn't.)

The same is true of functional programming. Pure functional is fine. Pure imperative is fine. Both in the same language get complicated. (Rust may have overdone it here.)

More elaborate type systems may not be helpful. We've been there in other contexts, with SOAP-type RPC and XML schemas, superseded by the more casual JSON.

Mechanisms for attaching software unit A to software unit B usually involve one being the master defining the interface and the other being the slave written to the interface. If A calls B and A defines the interface, A is a "framework". If B defines the interface, B is a "library" or "API". We don't know how to do this symmetrically, other than by much manually written glue code.

Doing user-defined work at compile time is still not going well. Generics and templates keep growing in complexity. Making templates Turing-complete didn't help.

borplk 1 day ago 5 replies      
I'd say the elephant in the room is graduating beyond plaintext (projectional editor, model-based editor).

If you think about it so many of our problems are a direct result of representing software as a bunch of files and folders with plaintext.

Our "fancy" editors and "intellisense" only goes so far.

Language evolution is slowed down because syntax is fragile and parsing is hard.

A "software as data model" approach takes a lot of that away.

You can cut down so much boilerplate and noise because you can have certain behaviours and attributes of the software be hidden from immediate view or condensed down into a colour or an icon.

Plaintext forces you to have a visually distracting element in front of you for every little thing. So as a result you end up with obscure characters and generally noisy code.

If your software is always in a rich data model format your editor can show you different views of it depending on the context.

So how you view your software when you are in "debug mode" could be wildly different from how you view it in "documentation mode" or "development mode".

You can also pull things from arbitrarily places into a single view at will.

Thinking of software as "bunch of files stored in folders" comes with a lot baggage and a lot of assumptions. It inherently biases how you organise things. And it forces you to do things that are not always in your interest. For example you may be "forced" to break things into smaller pieces more than you would like because things get visually too distracting or the file gets too big.

All of that stuff are arbitrary side effects of this ancient view of software that will immediately go away as soon as you treat AND ALWAYS KEEP your software as a rich data model.

Hell all of the problems with parsing text and ambiguity in sytnax and so on will also disappear.

gavanwoolery 1 day ago 2 replies      
I like to read about various problems in language design, as someone who is relatively naive to its deeper intricacies it really helps broaden my view. That said I have seen a trend towards adding various bells and whistles to languages without any sort of consideration as to whether it actually, in a measurable way, makes the language better.

The downside to adding an additional feature is that you are much more likely to introduce leaky abstraction (even things as minor as syntactical sugar). Your language has more "gotchas", a steeper learning curve, and a higher chance of getting things wrong or not understanding what is going on under the hood.

For this reason, I have always appreciated relatively simple homoiconic languages that are close-to-the-metal. That said, the universe of tools and build systems around these languages has been a growing pile of cruft and garbage for quite some time, for understandable reasons.

I envision the sweet spot lies at a super-simple system language with a tightly-knit and extensible metaprogramming layer on top of it, and a consistent method of accessing common hardware and I/O. Instant recompilation ("scripting") seamlessly tied to highly optimized compilation would be ideal while I am making a wishlist :)

mcguire 1 day ago 3 replies      
[Aside: Why do I have the Whiley (http://whiley.org/about/overview/) link marked seen?]

I was mildly curious why Graydon didn't mention my current, mildly passionate affair, Pony (https://www.ponylang.org/), and its use of capabilities (and actors, and per-actor garbage collection, etc.). Then, I saw,

"I had some extended notes here about "less-mainstream paradigms" and/or "things I wouldn't even recommend pursuing", but on reflection, I think it's kinda a bummer to draw too much attention to them. So I'll just leave it at a short list: actors, software transactional memory, lazy evaluation, backtracking, memoizing, "graphical" and/or two-dimensional languages, and user-extensible syntax."

Which is mildly upsetting, given that Graydon is one of my spirit animals for programming languages.

On the other hand, his bit on ESC/dependent typing/verification tech covers all my bases: "If you want to play in this space, you ought to study at least Sage, Stardust, Whiley, Frama-C, SPARK-2014, Dafny, F*, ATS, Xanadu, Idris, Zombie-Trellys, Dependent Haskell, and Liquid Haskell."

So I'm mostly as happy as a pig in a blanket. (Specifically, take a look at Dafny (https://github.com/Microsoft/dafny) (probably the poster child for the verification approach) and Idris (https://www.idris-lang.org/) (voted most likely to be generally usable of the dependently typed languages).

carussell 1 day ago 5 replies      
All this and handling overflow still doesn't make the list. Had easy handling of overflow been baked into C back then, we probably wouldn't be dealing with hardware where handling overflow is even more difficult than it would have been on the PDP-11. (On the PDP-11, overflow would have trapped.) At the very least, it would be the norm for compilers to emulate it whether there was efficient machine-level support or not. However, that didn't happen, and because of that, even Rust finds it acceptable to punt on overflow for performance reasons.
mcguire 1 day ago 0 replies      
"Writing this makes me think it deserves a footnote / warning: if while reading these remarks, you feel that modules -- or anything else I'm going to mention here -- are a "simple thing" that's easy to get right, with obvious right answers, I'm going to suggest you're likely suffering some mixture of Stockholm syndrome induced by your current favourite language, Engineer syndrome, and/or Dunning-Kruger effect. Literally thousands of extremely skilled people have spent their lives banging their heads against these problems, and every shipping system has Serious Issues they simply don't deal with right."


statictype 1 day ago 1 reply      
So Graydon works at Apple on Swift?

Wasn't he the original designer of Rust and employed at Mozilla?

Surprised that this move completely went under my radar

rtpg 1 day ago 2 replies      
The blurring of types and values as part of the static checking very much speaks to me.

I've been using Typescript a lot recently with union types, guards, and other tools. It's clear to me that the type system is very complex and powerful! But sometimes I would like to make assertions that are hard to express in the limited syntax of types. Haskell has similar issues when trying to do type-level programming.

Having ways to generate types dynamically and hook into typechecking to check properties more deeply would be super useful for a lot of web tools like ORMs.

bjz_ 1 day ago 2 replies      
I would love to see some advances toward distributed, statically typed languages that can run across a cluster and that would support type-safe rolling deployments. One would have to ensure that state could be migrated safely and that messaging can still happen between nodes of different versions. Similar to thinking about this 'temporal' dimension of code, it would be cool to see us push versioning and library upgrades further, perhaps supporting automatic migrations.
dom96 1 day ago 0 replies      
Interesting to see the mention of effect systems. However, I am disappointed that the Nim programming language wasn't mentioned. Perhaps Eff and Koka have effect systems that are far more extensive, but as a language that doesn't make effect systems its primary feature I think Nim stands out.

Here is some more info about Nim's effect system: https://nim-lang.org/docs/manual.html#effect-system

simonebrunozzi 1 day ago 1 reply      
I would have preferred a more informative HN title, instead of a semi-clickbaity "What next?", e.g.

"The next big step for compiled languages?"

hderms 1 day ago 0 replies      
Fantastic article. This is the kind of stuff I go to Hacker News to read. Had never even heard of half of these conceptual leaps.
ehudla 18 hours ago 0 replies      
lazyant 1 day ago 3 replies      
What would be a good book / website to learn the concepts & nomenclature in order to understand the advanced language discussions in HN like this one?
ehnto 1 day ago 5 replies      
I know I am basically dangling meat into the lions' den with this question: how has PHP7 done with regard to the modules section, or modularity, he speaks of?

I am interested in genuine and objective replies of course.

(Yes your joke is probably very funny and I am sure it's a novel and exciting quip about the state of affairs in 2006 when wordpress was the flagship product)

msangi 1 day ago 1 reply      
It's interesting that he doesn't want to draw too much attention to actors while they are prominent in Chris Lattner's manifesto for Swift [1]

[1] https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9...

jancsika 1 day ago 1 reply      
I'm surprised build time wasn't on the list.

Curious and can't find anything: what's the most complex golang program out there, and how long does it take to compile?

leeoniya 1 day ago 3 replies      
It's interesting that Rust isn't mentioned once in his post. I wonder if he's disheartened with the direction his baby went.
ilaksh 1 day ago 0 replies      
I think at some point we will get to projection editors being mainstream for programming, and eventually things that we normally consider user activities will be recognized as programming when they involve Turing complete configurability. This will be an offshoot of more projection editing.

I also think that eventually we may see a truly common semantic definitional layer that programming languages and operating systems can be built off of. It's just like the types of metastructures used as the basis for many platforms today, but with the idea of creating a truly Uber platform.

Another futuristic idea I had would be a VR projectional programming system where components would be plugged and configured in 3d.

Another idea might be to find a way to take the flexibility of advanced neural networks and make it a core feature of a programming language.

AstralStorm 1 day ago 3 replies      
Extra credit for whoever implements logic proofs on concurrent applications.
platz 1 day ago 2 replies      
whats wrong with software transactional memory?
baby 1 day ago 0 replies      
Can someone edit the title to something clearer? Thanks!
rurban 1 day ago 1 reply      
No type system improvements to support concurrency safety?
Blood Test That Spots Tumor-Derived DNA in Early-Stage Cancers hopkinsmedicine.org
357 points by ncw96  3 days ago   62 comments top 11
gourneau 3 days ago 4 replies      
I work for another player Guardant Health. We are the Liquid Biopsy market leaders right now. We just raised $360M Series E from SoftBank.

If you find this type of thing interesting and want to be part of it, we are hiring lots of folks. My team is looking for bioinformaticians, Python hackers, and machine learning people. Please reach out to me if you want to know more jgourneau@guardanthealth.com

AlexDilthey 2 days ago 0 replies      
All fair enough. The two big immediate challenges in the field are i) that the tumor-derived fraction of total cfDNA can be as low as 1:10000 (stage I) and ii) that it is difficult to make Illumina sequencing more accurate than 1 error in 1000 sequenced bases (in which case the 1:10000 signal is drowned out). This paper uses some clever statistical tricks to reduce Illumina sequencing error; one of these tricks is to leverage population information, i.e. the more samples you sequence, the better your understanding of (non-cancer-associated) systematic errors. This follows a long tradition in statistical genetics of using multi-sample panels to improve analysis of individual samples. There are also biochemical approaches like SafeSeq or Duplex Sequencing to reduce sequencing error.
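To make the scale of the problem concrete, here is a back-of-envelope sketch using only the illustrative ratios quoted above (a 1:10000 tumor fraction and a 1:1000 per-base error rate; the read depth is an assumption for the example):

```python
# Back-of-envelope sketch: at one genomic position, how many reads look
# mutated because of real tumor DNA vs. because of sequencing error?
reads = 100_000                        # assumed reads covering one position

true_variant_reads = reads // 10_000   # tumor fraction 1:10000 -> 10 reads
error_reads = reads // 1_000           # error rate 1:1000     -> 100 reads

# Without error correction, random errors outnumber the real signal 10:1,
# so a naive mutation caller cannot distinguish stage-I tumors from noise.
print(true_variant_reads, error_reads)  # -> 10 100
```

This is why the statistical and biochemical error-reduction tricks mentioned above matter so much at stage I.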

Not-so-obvious point #1 is that the presence of cancer-associated mutations in blood != cancer. You find cancer-associated mutations in the skin of older probands, and presumably many of the sampling sites would never turn into melanomas. A more subtle point is that cfDNA is likely generated by dying cells, i.e. a weak cancer signature in blood might also be indicative of the immune system doing its job.

Point #2 is that it's not necessarily about individual mutations, which are, due to the signal-to-noise ratio alluded to above, difficult to pick up. One can also look at the total representation of certain genes in cfDNA (many cancers have gene amplifications or deletions, which are easier to pick up because they affect thousands of bases at the same time), and the positioning of individual sequenced molecules relative to the reference genome. It seems that these positions are correlated with gene activities (transcription) in the cells that the cfDNA comes from, and cancer cells have distinct patterns of gene activity.

conradev 3 days ago 1 reply      
There is also Freenome, which raised a $65m Series A to bring something similar to market:

> Last year, we raised $5.5 million to prove out the potential of this technology. Now, it's time to make sure that it's safe and ready for the broader population.


McKayDavis 3 days ago 1 reply      
I haven't read the referenced study, but I'm sure this is using the same (or very similar) cell free DNA (cfDNA) sequencing techniques currently used clinically for Non Invasive Prenatal Testing (NIPT) to screen for genetic defects such as trisomy 21 (Down Syndrome).

NIPT is a non-invasive blood screening test that is quickly becoming the clinical standard of care. Many insurance companies now cover the entire cost of NIPT screening for at-risk pregnancies (e.g. women of "Advanced Maternal Age" (35yo+)). The debate is moving to whether it should be utilized/covered for average-risk pregnancies as well.

[1] http://capsprenatal.com/about-nipt/

[2] https://www.genomeweb.com/molecular-diagnostics/aetna-wont-c...

hprotagonist 3 days ago 1 reply      
Slowly but surely. This isn't even close to a real diagnostic, but it's a hopeful proof of concept.

I really do wish detection studies would publish a ROC curve, though, or at least d'.

maddyboo 3 days ago 4 replies      
Possibly a silly question, but is it possible for a 'healthy' person who doesn't have any cancer risk factors to get a test like this done?
melling 3 days ago 3 replies      
According to Craig Venter, early detection is what we need to eliminate cancer:


I guess most are treatable if caught early?

amitutk 3 days ago 3 replies      
Didn't Grail raise a billion dollars to do just this?
AlexCoventry 3 days ago 2 replies      
> They found none of the cancer-derived mutations among blood samples of 44 healthy individuals.

Is 98% specificity adequate for a cancer test?

ziggzagg 3 days ago 1 reply      
When this test has a near 100% success rate, how does it help the patients? Can it really prevent cancer?
jonathanjaeger 3 days ago 0 replies      
Tangent: I'm invested in a small-cap stock, Sophiris Bio, that's in a P2B study for prostate cancer with a drug developed out of Johns Hopkins called PRX302 (Topsalysin).

That and the article about blood tests show there's a lot they're working on in noninvasive or minimally invasive procedures to help detect cancer early on.

Facebook You are the Product lrb.co.uk
381 points by rditooait  4 days ago   286 comments top 9
notadoc 4 days ago 9 replies      
I stopped using Facebook years ago and could not recommend doing so more. I found it to be mental pollution at best and a total waste of time.

If you want to 'keep in touch' with people, call or text them. Make an effort to actually interact with the people who matter to you.

olympus 4 days ago 8 replies      
I'm here to fix some ignorance, since the source of the "you are the product" idea is not these books.

Metafilter user blue_beetle first put this idea online when he said "If you are not paying for it, you're not the customer; you're the product being sold" in response to the Digg revolt of 2010. The idea apparently existed for a few decades prior regarding TV advertising. I prefer to think blue_beetle was the one who brought it into the zeitgeist.



Edit: Alex3917 posted a similar idea on HN on 6 May 2010, beating blue_beetle by a couple months. Gotta give credit where it's due: https://news.ycombinator.com/item?id=15030959

akeck 4 days ago 7 replies      
I wonder if, in the future, being able not to be on any social media will be a higher-class privilege.
phatbyte 4 days ago 0 replies      
I dropped all my social networks at the beginning of the year. I did so for two main reasons.

First, for privacy concerns. FB especially was getting too creepy for me. I felt every action I took was being analyzed and filtered, like I was a lab rat. The fact that these companies know so much about us is pretty scary; I felt like I needed to regain my privacy and fight the system somehow.

The second reason was that I wasn't getting anything substantial that could improve my life overall. All I saw were dumb-ass posts, ignorant comments, passive aggressiveness, the "look at me doing this really mundane thing, but please like my picture so I can feel validated", etc. It feels like a race to see which of us has the better life. I honestly feel bad for how much time I spent there when I could have applied that time to learning new things.

After more than 6 months without FB, here's what I've learned:

- I still keep in touch with my closest friends; we chat on Slack/iMessage every day. It's actually a good way to know who really misses you: during this time, only about 5% of my FB friends reached out to me through message or phone to ask how things were in life. As for the other 95%, I really don't even remember most of their names anymore. Just ask yourselves: why do we have to share so much of our lives with so many "friends"? I know we can filter, create groups, etc., but damn... do you really want to spend your life "managing" relationships, deciding who sees what? I find that tiresome.

- I don't feel left out of anything, because I keep track of local events using other sources and read news from trustworthy websites. If I need to share anything, I just use good old email, or show the pictures from my latest vacation face-to-face on my phone, without having to share anything with anyone else.

- I gained more time and have less stress; I don't feel overwhelmed trying to keep track of every social media update. I just don't care. If something important happens I will know it sooner or later.

- I no longer have this need to constantly keep posting photos of what I'm doing outdoors or whatever. I don't have the need to feel validated by anyone but myself.

- But most importantly, I regained my privacy, or at least my social footprint is bare none at this point. I'm using uBlock, Firefox, DuckDuckGo and other tools to keep trackers at bay.

I may never completely win this war, but at least my habits aren't being recorded and fed to any ML algorithm.

grwthckrmstr 3 days ago 1 reply      
I'm using Facebook to earn my "fuck you money".

The advertising tools are so powerful it is downright scary; the level of targeting one can do with them is just insane.

That's partly the reason why I stopped posting updates. After seeing the depth of the advertising tools.

I don't use Facebook for posting personal updates anymore but only to fuel my business. I realise that the only way I can "choose" to stay out of all these services that track and sell our identity to advertisers is if I have "fuck you money" (money is the currency you exchange for your limited time in order to survive in this world).

adrianlmm 4 days ago 2 replies      
I've been using Facebook for years. It's an awesome tool: I'm in contact with friends, relatives, and partners.
amrrs 4 days ago 0 replies      
Cal Newport has been saying that Facebook and other social media are engineered to be addictive, and we keep seeing young people falling for it. Adam Alter made a similar point: when we have proper regulation for substances, why not for something like social media?

FB is not just making us another node in a vast network graph; it is also breeding a worse kind of boring grown-up, who can't do anything worthier than write an FB post condemning something and feel great about their social responsibility.

occultist_throw 4 days ago 2 replies      

With any online service where you do not explicitly pay money for goods/services rendered, you can rest assured that you are paying with data or influence (advertisement).

HN is no different. They control the news and how the news is displayed. They run the Y Combinator venture capital fund. You do not pay them, but they control influence (advertising). I would expect different if I paid YC for news access... but I don't.

kristianc 4 days ago 0 replies      
> Whatever comes next will take us back to those two pillars of the company, growth and monetisation. Growth can only come from connecting new areas of the planet.

This is a questionable assertion. Giant tech companies like Oracle and IBM don't tend to expand in this way; they make acquisitions of smaller companies and use them to enhance the platform capabilities of the larger product.

I'm sure Zuck will be delighted if the "bottom billion" do all sign up and use Facebook, but they're never going to be massively profitable accounts.

Imo the acquisitions of Instagram and WhatsApp show the way that Facebook will go - Instagram adds a new and lucrative ad format, a profitable user segment and a base for adding in ideas from other platforms, such as Snapchat. WhatsApp builds out Facebook's graph and can be mined for intel.

Ideal OS: Rebooting the Desktop Operating System joshondesign.com
428 points by daureg  12 hours ago   247 comments top 78
joshmarinacci 5 hours ago 5 replies      
I'm the original author. I hadn't planned to publicize this yet. There are still some incomplete parts, broken links, and missing screenshots. But the Internet wants what it wants.

Just to clarify a few things.

I just joined Mozilla Devrel. None of this article has anything to do with Mozilla.

I know that none of the ideas in this article are new. I am a UX expert and have 25 years experience writing professional software. I personally used BeOS, Oberon, Plan 9, Amiga, and many others. I read research papers for fun. My whole point is that all of this has been done before, but not integrated into a nice coherent whole.

I know that a modern Linux can do most of these things with Wayland, custom window managers, DBus, search indexes, hard links, etc. My point is that the technology isn't that hard. What we need is to put all of these things into a nice coherent whole.

I know that creating a new mainstream desktop operating system is hopeless. I don't seriously propose doing this. However, I do think creating a working prototype on a single set of hardware (RPi3?) would be very useful. It would give us a fertile playground to experiment with ideas that could be ported to mainstream OSes.

And thank you to the nearly 50 people who have signed up to the discussion list. What I most wanted out of this article was to find like minded people to discuss ideas with.

Thanks, Josh

Damogran6 11 hours ago 4 replies      
So what he's saying is: remove all these layers because they're bad, but add these OTHER layers because they're good.

That's how you make another AmigaOS, or Be; I'm sure Atari still has a group of a dozen folks playing with theirs, too.

The OSes of the past 20 years haven't shown much advancement because the advancement is happening higher up the stack. You CAN'T throw out the OS and still have ARKit. A big, bloated, mature, Moore's-Law-needing OS is also stable, has hooks out the wazoo, AND A POPULATION USING IT.

Four guys coding in the dark on the bare metal just can't build an OS anymore: it won't have GPU access, it won't have a solid TCP/IP stack, it won't have good USB support, or caching, or a dependable file system.

All of these things take a ton of time, and people, and money, and support (if you don't have money, you need the volunteers)

Go build the next modern OS, I'll see you in a couple of years.

I don't WANT this to sound harsh, I'm just bitter that I saw a TON of awesome, fledgling, fresh operating systems fall by the wayside... I used BeOS, I WANTED to use BeOS, I'da LOVED it if they'd won out over NeXT (another awesome operating system... at least that one survived).

At a certain level, perhaps what he wants is to leverage ChromeOS...it's 'lightweight'...but by the time it has all the tchotchkes, it'll be fat and bloated, too.

jcelerier 9 hours ago 2 replies      
> Why can I dock and undock tabs in my web browser or in my file manager, but I can't dock a tab between the two apps? There is no technical reason why this shouldn't be possible.

that's absolutely possible on linux with i3wm for instance

> I'd like to pipe my Skype call to a video analysis service while I'm chatting, but I can't really run a video stream through awk or sed.

awk and sed, no, but there are many CLI tools that accept video streams through a pipe, e.g. FFmpeg. You wouldn't open your video in a GUI text editor, so why would you in a CLI text editor?

> Window Managers on traditional desktops are not context or content aware, and they are not controlable by other programs.

Sure they are, on linux: https://linux.die.net/man/1/wmctrl

Fifteen years ago people were already controlling their WM through dbus: http://wiki.compiz.org/Plugins/Dbus#Combined_with_xdotool

The thing is, no one really cares about this in practice.

cs702 11 hours ago 7 replies      
Yes, existing desktop applications and operating systems are hairballs with software layers built atop older software layers built atop even older software layers.

Yes, if you run the popular editor Atom on Linux, you're running an application built atop Electron, which incorporates an entire web browser with a Javascript runtime, so the application is using browser drawing APIs, which in turn delegate drawing to lower-level APIs, which interact with a window manager that in turn relies on X...

Yes, it's complexity atop complexity atop complexity all the way down.

But the solution is NOT to throw out a bunch of those old layers and replace them with new layers!!!

Quoting Joel Spolsky[1]:

"There's a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. And here is the interesting observation: they are probably wrong. The reason that they think the old code is a mess is because of a cardinal, fundamental law of programming: It's harder to read code than to write it. ... The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they've been fixed. ... When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work."

[1] https://www.joelonsoftware.com/2000/04/06/things-you-should-...

jacinabox 6 minutes ago 0 replies      
In regards to the issue of file systems being non-searchable, it's definitely worth taking a look at compressed full-text indexes: http://pizzachili.dcc.uchile.cl/resources/compressed_indexes...

Under this scheme each file on disk would be stored as an index with constant factor overhead. The original file is not needed; all of the data can be decoded out of the index.

spankalee 11 hours ago 2 replies      
This sounds a lot like Fuchsia, which is all IPC-based, has a syncable object-store[1], a physically-based renderer[2], and a UI organized into cards and stories[3], where a story is "a set of apps and/or modules that work together for the user to achieve a goal", and which can be clustered[4] and arranged in different ways[5].

[1]: https://fuchsia.googlesource.com/ledger/

[2]: https://fuchsia.googlesource.com/escher/

[3]: https://arstechnica.com/gadgets/2017/05/googles-fuchsia-smar...

[4]: https://fuchsia.googlesource.com/sysui/#important-armadillo-...

[5]: https://fuchsia.googlesource.com/mondrian/

alexandercrohde 10 hours ago 4 replies      
I really don't understand the negativity here. I sense a very dismissive tone, but most of the complaints are implementation details, or that this has been tried before (so what?).

I think anybody who really thinks about it would have to agree modern OSes are a disgusting mess.

-- Why does an 8-core Mac have moments when it is so busy I can't click anything and only see a pinwheel? It's not the hardware. No app should have the capability, even if it tried, to slow down the OS/UI (without root access).

-- Yes, it should be a database design, with permissions.

-- Yes, by making it a database design, all applications get the ability to share their content (i.e. make files) in a performant searchable way.

-- Yes, permissions are a huge issue. If every app were confined to a single directory (Docker-like), then backing up an app, deleting an app, or terminating an app would be a million times easier. Our OSes will never be secure until they're rebuilt from the ground up. (Right now Windows lets apps store garbage in the 'registry', and Linux stores your apps' data strewn throughout /var/etc, /var/log, /app/init, .... These should all be materialized views, i.e. sym-links.)

-- Mac Finder is cancer. If the OS were modularizable it'd be trivial for me, a software engineer, to drop-in a replacement (like you can with car parts).

-- By having an event-driven architecture, this gives me exact tracking of when events happened. I'd like a full record of every time a certain file changes; if file changes can't happen without an event, and all events are indexed in the DB, then I have perfect auditability.

-- I could also assign permission events (throttle browser CPU to 20% max, pipe all audio from spotify to removeAds.exe, pipe all UI notifications from javaUpdater to /dev/null)

I understand the "Well who's gonna use it?" question, but it's circular reasoning. "Let's not get excited about this, because nobody will use it, because it won't catch on, because nobody got excited about it." If you get an industry giant behind it (Linus, Google, Carmack) you can absolutely reinvent a better wheel (e.g. GIT, chrome) and displace a huge marketshare in months.

noen 11 hours ago 7 replies      
As a current developer, former 10 year UX designer, and developer before that, this kind of article irks me to no end.

He contradicts his core assertion (OS models are too complex and layered) with his first "new" feature.

Nearly everything on this manifesto has been done before, done well, and many of his gripes are completely possible in most modern OS's. The article just ignores all of the corner cases and conflicts and trade-offs.

Truly understanding the technology is required to develop useful and usable interfaces.

I've witnessed hundreds of times as designers hand off beautiful patterns and workflows that can't ever be implemented as designed. The devil is in the details.

One of the reasons Windows succeeded for so long is that it enabled people to do a common set of activities with minimal training and maximizing reuse of as few common patterns as possible.

Having worked in and on Visual Studio, it's a great example of what happens when you build an interface that allows the user to do anything, and the developer to add anything. Immensely powerful, but 95% of the functionality is rarely if ever used, training is difficult because of the breadth and pace of change, and discovery is impossible.

dcow 8 hours ago 0 replies      
Android already tried things like a universal message bus and a module-based architecture and while nice it doesn't quite live up to the promise for two reasons:

1. Application devs aren't trained to architect new software. They will port old shitty software patterns from familiar systems because there's no time to sit down and rewrite photoshop for Android. It's sad but true.

2. People abuse the hell out of it. Give someone a nice thing and someone else will ruin it, whether they're trying to or not. A universal message bus has security and performance implications. Maybe if Android were a desktop OS not bound by limited resources it wouldn't have pulled out all the useful intents and neutered services; but then again, the author's point is that we should remove these complex layers, and clearly having them was too complex/powerful/hungry for Android.

I do think there's a point to be made that we're very mouse-and-keyboard-centric at the primitive I/O level and in UI design. I always wondered what the "command line" would look like if it were more complex than 128 ASCII characters in a one-dimensional array. But it probably wouldn't be as intuitive for humans to interface with unless you could speak and gesture to it, as the author suggests.

avaer 11 hours ago 4 replies      
> I suspect the root cause is simply that building a successful operating system is hard.

It's hard but not that hard; tons of experimental OS-like objects have been made that meet these goals. Nobody uses them.

What's hard is getting everyone on board enough for critical inertia to drive the project. Otherwise it succumbs to the chicken-and-egg problem, and we continue to use what we have because it's "good enough" for what we're trying to do right now.

I suspect the next better OS will come out of some big company that has the clout and marketing to encourage adoption.

ghinda 7 hours ago 0 replies      
You have most of these, or at least very similar versions, in Plasma/KDE today:

> Document Database

This is what Akonadi was when it came out for 4.x. Nepomuk was the semantic search framework, so you could rate/tag/comment on files and search by them. They had some performance problems and were not very well received.

Nepomuk has been superseded by Baloo, so you can still tag/rate/comment files now.

Most KDE apps also use KIO slaves: https://www.maketecheasier.com/quick-easy-guide-to-kde-kio-s...

> System Side Semantic Keybindings

> Windows

Plasma 4 used to have compositor-powered tabs for any app. Can't say if it will be coming back to Plasma 5. Automatic app-specific colors (and other rules) are possible now.

> Smart copy and paste

The clipboard plasmoid in the system tray holds multiple items, offers automatic actions for different types of content, and can be pinned to remain visible.

> Working Sets

These are very similar to how Activities work. Don't seem to be very popular.

antoineMoPa 10 hours ago 2 replies      
I appreciate the article for its coverage of many OS (including BeOS, wow, I should try that). What about package management though? Package management really defines the way you live under your flavor of linux, and there is a lot of room for improvement in current package managers (like decentralizing them, for example).


> I know I said we would get rid of the commandline before, but I take that back. I really like the commandline as an interface sometimes, it's the pure text nature that bothers me. Instead of chaining CLI apps together with text streams we need something richer [...]

I can't agree with that; it is the plain-text nature of the command line that makes it so useful and simple once you know a basic set of commands (ls, cd, find, sed, grep, plus whatever your specific task needs). Plain text is easy to understand and manipulate to perform whatever task you need to do. The moment you learn to chain commands and save them to a script for future use, the sky is the limit. I do agree with using voice to chain commands, but I would not complain about the plain-text nature, or try to bring buttons or other forms of unneeded complexity to the command line.
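As a toy illustration of why line-oriented text composes so well, here is roughly `grep 404 | cut -d' ' -f2` re-expressed as chained stages (the log lines are invented for the example):

```python
# Toy model of a text pipeline: each stage consumes and emits plain lines,
# so stages compose freely -- the property being defended above.
log = [
    "GET /index.html 200",
    "GET /missing 404",
    "POST /login 200",
    "GET /old 404",
]

# Stage 1: keep lines mentioning 404 (like `grep 404`).
errors = (line for line in log if "404" in line)
# Stage 2: extract the second whitespace-separated field (like `cut -f2`).
paths = [line.split()[1] for line in errors]
print(paths)  # -> ['/missing', '/old']
```

The only contract between stages is "lines of text", which is exactly why new tools can be slotted into a pipeline without anyone coordinating schemas.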

nwah1 10 hours ago 1 reply      
I agree with a lot of the critics in the comments, but I will say that the author has brought to my attention a number of features that I'm now kind of upset that I don't have.

I always thought LED keyboards were stupid because they are useless, but if they could map to hotkeys in video players and such, that could be very useful, assuming you can turn off the LEDs.

His idea for centralized application configs and keybindings isn't bad if we could standardize on something like TOML. The Options Framework for WordPress plugins is an example of this kind of thing, and it does help. It won't be possible to get all the semantics agreed upon, of course, but maybe 80% is enough.

Resurrecting WinFS isn't so important, and I feel like there'd be no way to get everyone to agree on a single database unless every app were developed by one team. I actually prefer heterogeneity in the software ecosystem, to promote competition. We mainly need proper journalling filesystems with all the modern features. I liked the vision of Lennart Poettering in his blog post about stateless systems.

The structured command line linked to a unified message bus, allowing for simple task automation sounds really neat, but has a similar problem as WinFS. But I don't object to either, if you can pull it off.

Having a homogenous base system with generic apps that all work in this way, with custom apps built by other teams is probably the compromise solution and the way things have trended anyways. As long as the base system doesn't force the semantics on the developers, it is fine.

jmull 10 hours ago 1 reply      
This isn't worth reading.

(It's painfully naive, poorly reasoned, has inaccurate facts, is largely incoherent, etc. Even bad articles can serve as a nice prompt for discussion, but I don't think this one is even good for that. I don't think we'd ever get past arguing about what it is most wrong about.)

lake99 11 hours ago 1 reply      
> Traditional filesystems are hierarchical, slow to search, and don't natively store all of the metadata we need

I don't know what he means by "traditional", but Linux native filesystems can store all the metadata you'd want.

> Why can't I have a file in two places at once on my filesystem?

POSIX-compatible filesystems have supported that for a long time already.
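Concretely, the POSIX mechanism is the hard link: two directory entries pointing at one inode. A minimal sketch using Python's stdlib (equivalent to `ln a.txt b.txt` in the shell; the file names are made up):

```python
import os
import tempfile

# One file, reachable from two paths at once, via a POSIX hard link.
d = tempfile.mkdtemp()
a = os.path.join(d, "a.txt")
b = os.path.join(d, "b.txt")

with open(a, "w") as f:
    f.write("hello")
os.link(a, b)  # second directory entry pointing at the same inode

# Both names resolve to the same underlying file.
print(os.stat(a).st_ino == os.stat(b).st_ino)  # -> True
with open(b) as f:
    print(f.read())  # -> hello
```

Soft links (`os.symlink`) give the looser by-name version of the same idea.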

It seems to me that all the things he wants are achievable through Plan9 with its existing API. The only thing missing is the ton of elbow grease to build such apps.

xolve 11 hours ago 0 replies      
Not an ideal article for anything. It looks like it was written with limited research; by the end of it I can hardly keep focus.

> Bloated stack.

True, though there are options the author hasn't discussed.

> A new filesystem and a new video encoding format.

Apple created a new FS and a new video format. These are far too fundamental as changes to be glossed over as trivial in a single line.

> CMD.exe, the terminal program which essentially still lets you run DOS apps, was only replaced in 2016. And the biggest new feature of the latest Windows 10 release? They added a Linux subsystem. More layers piled on top.

The Linux subsystem is a great feature of Windows: the ability to run bash on Windows natively. What's the author complaining about?

> but how about a system wide clipboard that holds more than one item at a time? That hasn't changed since the 80s!

Heard of Klipper and similar apps in KDE5/Plasma? It's been there for so long, and keeps text, images, and file paths in the clipboard.

> Why can't I have a file in two places at once on my filesystem?

Hard links and soft links??

> Filesystem tags

They're there!

What I feel about the article is: OSes have had these capabilities for a long time; where are the killer applications written for them?

dgreensp 10 hours ago 2 replies      
I love it, especially using structured data instead of text for the CLI and pipes, and replacing the file system with a database.

Just to rant on file systems for a sec, I learned from working on the Meteor build tool that they are slow, flaky things.

For example, there's no way on any desktop operating system to read the file tree rooted at a directory and then subscribe to changes to that tree, such that the snapshot combined with the changes gives you an accurate updated snapshot. At best, an API like FSEvents on OS X will reliably (or 99% reliably) tell you when it's time to go and re-read the tree or part of the tree, subject to inefficiency and race conditions.
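For the record, the fallback this forces on you looks something like the following: re-read the tree and diff snapshots whenever the OS hints that something changed (a stdlib-only sketch; the mtime granularity and the vanished-file race in the middle are exactly the flakiness being complained about):

```python
import os

def snapshot(root):
    """Map each file path under root to its mtime: one 'read of the tree'."""
    snap = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            p = os.path.join(dirpath, name)
            try:
                snap[p] = os.stat(p).st_mtime
            except OSError:
                pass  # file vanished between listing and stat: the usual race
    return snap

def diff(old, new):
    """Changes between two snapshots: (added, removed, modified) path sets."""
    added = new.keys() - old.keys()
    removed = old.keys() - new.keys()
    modified = {p for p in old.keys() & new.keys() if old[p] != new[p]}
    return added, removed, modified
```

An API that delivered the snapshot plus an exact change stream would make both functions unnecessary; instead every build tool reimplements some variant of this.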

"Statting" 10,000 files that you just read a second ago should be fast, right? It'll just hit disk cache in RAM. Sometimes it is. Sometimes it isn't. You might end up waiting a second or two.

And don't get me started on Windows, where simply deleting or renaming a file, synchronously and atomically, are complex topics you could spend a couple hours reading up on so that you can avoid the common pitfalls.

Current file systems will make even less sense in the future, when non-volatile RAM is cheap enough to use in consumer devices, meaning that "disk" or flash has the same performance characteristics and addressability as RAM. Then we won't be able to say that persisting data to a disk is hard, so of course we need these hairy file system things.

Putting aside how my data is physically persisted inside my computer, it's easy to think of better base layers for applications to store, share, and sync data. A service like Dropbox or BackBlaze would be trivial to implement if not for the legacy cruft of file systems. There's no reason my spreadsheets can't be stored in something like a git repo, with real-time sync, provided by the OS, designed to store structured data.

mwcampbell 11 hours ago 0 replies      
I'm glad the author thought about screen readers and other accessibility software. Yes, easy support for alternate input methods helps. But for screen readers in particular, the most important thing is a way to access a tree of objects representing the application's UI. Doing this efficiently over IPC is hard, at least with the existing infrastructure we have today.

Edit: I believe the state of the art in this area is the UI Automation API for Windows. In case the author is reading this thread, that would be a good place to continue your research.

benkuykendall 9 hours ago 0 replies      
The idea of system wide "document database" is really intriguing. I think the author identified a real pattern that could be addressed by such a change:

> In fact, many common applications are just text editors combined with data queries. Consider iTunes, Address Book, Calendar, Alarms, Messaging, Evernote, Todo list, Bookmarks, Browser History, Password Database, and Photo manager. All of these are backed by their own unique datastore. Such wasted effort, and a block to interoperability.

The ability to operate on my browser history or emails as a table would be awesome! And this solves so many issues about losing weird files when trying to back up.

However, I would worry a lot about schema design. Surely most apps would want custom fields in addition to whatever the OS designer decided constitutes an "email". This would throw interoperability out the window, and keeping it fast becomes a non-trivial DB design problem.

Anyone have more insights on the BeOS database or other attempts since?

(afterthought: like a lot of ideas in this post, this could be implemented in userspace on top of an existing OS)

Groxx 5 hours ago 0 replies      
>Consider iTunes. iTunes stores the actual mp3 files on disk, but all metadata in a private database. Having two sources of truth causes endless problems. If you add a new song on disk you must manually tell iTunes to rescan it. If you want to make a program that works with the song database you have to reverse engineer iTunes DB format, and pray that Apple doesn't change it. All of these problems go away with a single system wide database.

Well. Then you get Spotlight (on OSX, at least) - system-wide file/metadata/content search.

It's great! It's also quite slow at times. Slow (and costly) to index, slow to query (initial / common / by-name searches are fast, but content searches can take a second or two to find anything - this would be unacceptable in many applications), etc.

I like databases, but building a single well-performing one for all usages is quite literally impossible. Forcing everyone into a single system doesn't tend to add up to a positive thing.

hackermailman 8 hours ago 0 replies      
This guy wants GuixSD for 60% of his feature requests (isolated apps, version control, snapshots, ease of configuration, and the ability to abstract all of it away) and Hurd for his multi-threaded ambitions: modularity, the ability to do things like mount a database in a home directory to use as a fileserver, and message passing. This is slowly happening already: https://fosdem.org/2017/schedule/event/guixhurd/

Then he wants to completely redesign the GUI to manage it all, which sounds a lot like Firefox OS with aware desktop apps, but with the added bonus that most things that require privileges on desktop OSs no longer need them with Guix. Software drivers are implemented in user space as servers with GNU Hurd, so you can access these things and all the functionality that comes with them, exactly what the author wants.

diegof79 9 hours ago 1 reply      
What the author wants is something like Squeak. The idea behind Smalltalk wasn't to create a programming language, but to realize the DynaBook (google for the essay "History Behind Smalltalk").

While I agree with the author that more innovation is needed on the desktop, I think the essay is very misinformed.

For example, Squeak can be seen as an OS with very few layers: everything is an object, and syscalls are primitives. As a user you can play with all the layers and rearrange the UI as you want.

So why didn't the idea take off? I don't know exactly (though I have my hypotheses). There are many factors to balance, and those many factors are what make design hard.

One of those factors is that people tend to put their priorities for innovation in the wrong places. A good example is what the author mentions as his priorities. None of the items addresses fundamental problems that computer users face today (from my perspective, of course).

dgudkov 1 hour ago 0 replies      
Many interesting ideas and concepts, no question. However, if this were a startup pitch I would struggle to see the killer application. I can see features here (some are very exciting!), but I'm failing to see a product. What kind of real-life problem would such an OS solve? Is that problem worth the billions of dollars required to develop a new OS and a toolkit of apps for it?
chrisleader 4 hours ago 0 replies      
"First of all, it's quite common, especially in enterprise technology, for something to propose a new way to solve an existing problem. It can't be used to solve the problem in the old way, so it doesn't work, and proposes a new way, and so no-one will want that. This is how generational shifts work - first you try to force the new tool to fit the old workflow, and then the new tool creates a new workflow. Both parts are painful and full of denial, but the new model is ultimately much better than the old. The example I often give here is of a VP of Something or Other in a big company who every month downloads data from an internal system into a CSV, imports that into Excel and makes charts, pastes the charts into PowerPoint and makes slides and bullets, and then emails the PPT to 20 people. Tell this person that they could switch to Google Docs and they'll laugh at you; tell them that they could do it on an iPad and they'll fall off their chair laughing. But really, that monthly PowerPoint status report should be a live SaaS dashboard that's always up-to-date, machine learning should trigger alerts for any unexpected and important changes, and the 10 meg email should be a Slack channel. Now ask them again if they want an iPad." - Benedict Evans
microcolonel 9 hours ago 0 replies      
> Why can't I have a file in two places at once on my filesystem?

You can! Use hardlinks.
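For anyone who hasn't used them: a hard link is a second directory entry for the same inode, so the "same file in two places" already works on every mainstream filesystem. A quick illustration in Python (any language or `ln` would do):

```python
import os, tempfile

# Two directory entries pointing at the same inode:
# edit through one path, see the change through the other.
d = tempfile.mkdtemp()
a = os.path.join(d, "a.txt")
b = os.path.join(d, "b.txt")

with open(a, "w") as f:
    f.write("hello")

os.link(a, b)  # b is now the same file as a, not a copy

with open(a, "a") as f:
    f.write(" world")

with open(b) as f:
    print(f.read())  # hello world
```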

> Window Managers on traditional desktops are not context or content aware, and they are not controlable by other programs.

There are well established standards for controlling window managers from programs, what on earth are you talking about?

> Applications would do their drawing by requesting a graphics surface from the compositor. When they finish their drawing and are ready to update they just send a message saying: please repaint me. In practice we'd probably have a few types of surfaces for 2d and 3d graphics, and possibly raw framebuffers. The important thing is that at the end of the day it is the compositor which controls what ends up on the real screen, and when. If one app goes crazy the compositor can throttle it's repaints to ensure the rest of the system stays live.

Just like Wayland!

> All applications become small modules that communicate through the message bus for everything. Everything. No more file system access. No hardware access. Everything is a message.

Just like flatpak!

> Smart copy and paste

This is entirely feasible with the current infrastructure.

> Could we actually build this? I suspect not. No one has done it because, quite honestly, there is no money in it. And without money there simply aren't enough resources to build it.

Some of this is already built, and most of it is entirely feasible with existing systems. It's probably not even that much work.

Skunkleton 7 hours ago 0 replies      
In 2017 a modern operating system such as Android, iOS, or Chrome (the browser) exists as a platform. Applications developed for these platforms _must_ conform to the application model set by the platform. There is no supported way to create applications that do not conform to the design of the platform. This is in stark contrast to the "1984" operating systems that the OP is complaining about.

It is very tempting to see all the complexity of an open system and wish it were more straightforward, more like a closed system. But this is a dangerous thing to advocate. If we all had access only to closed systems, who would we be ceding control to? Do we really want our desktop operating systems to be just another fundamentally closed-off walled garden?

jimmaswell 8 hours ago 0 replies      
Patently false that Windows hasn't innovated, UX or otherwise. Start menu search, better driver containment/other bsod reduction, multi-monitor expanding task bar, taskbar button reordering, other Explorer improvements, lots of things.
Animats 6 hours ago 0 replies      
If you want to study user interfaces, look at programs which solve a hard problem - 3D animation and design programs. Learn Inventor or Maya or Blender.

Autodesk Inventor and Blender are at opposite ends of the "use the keyboard" range. In Inventor, you can do almost everything with the mouse except enter numbers and filenames. Blender has a 10-page list of "hotkeys". It's worth looking at how Inventor does input. You can change point of view while in the middle of selecting something. This is essential when working on detailed objects.

zaro 11 hours ago 2 replies      
> I suspect the root cause is simply that building a successful operating system is hard.

Well, it is hard, but this is not the main source of issues. The obstacle to having nice things on the desktop is this constant competition and wheel reinvention, the lack of cooperation.

The article makes some very good points, but just think of this simple fact: it's 2017, and the ONLY filesystem that will seamlessly work with macOS, Windows, and Linux at the same time is FAT, a file system which is almost 40 years old. And it is not because it is so hard to make such a filesystem. Not at all. Now this is at the core of the reasons why we can't have nice things :)

IamCarbonMan 6 hours ago 0 replies      
All of this is possible without throwing out any existing technology (at least for Linux and Windows; if Apple doesn't envision a use case for something, it's very likely never going to exist on their platform). Linux compositors have the ability to manipulate windows however the hell they want, and while it's not as popular as it used to be, you can change the default shell on Windows and use any window manager you can program.

A database filesystem is two parts: a database and a filesystem. Instead of throwing out the filesystem, which works just fine, add a database which offers views into the filesystem. The author is really woe-is-me about how an audio player doesn't have a database of mp3s, but that's something that is done all the time. Why do we have to throw out the filesystem just to have database queries? And if it's because every app has to have its own database: no they don't. If you're going to rewrite all the apps anyway, then rewrite them to use the same database. Problem solved.

The hardest concept to implement in this article would be the author's idea of modern GUIs, but it can certainly be done.
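The "database as a view into the filesystem" idea fits in a few lines of userspace code. A toy sketch (directory and file names are made up for the demo):

```python
import os, sqlite3, tempfile

# A userspace "view": index a directory tree into a throwaway database,
# then query it like the hypothetical system-wide music library.
root = tempfile.mkdtemp()
for name in ("one.mp3", "two.mp3", "notes.txt"):
    open(os.path.join(root, name), "w").close()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE files (path TEXT, name TEXT, ext TEXT)")
for dirpath, _, names in os.walk(root):
    for n in names:
        db.execute("INSERT INTO files VALUES (?, ?, ?)",
                   (os.path.join(dirpath, n), n, os.path.splitext(n)[1]))

songs = db.execute(
    "SELECT name FROM files WHERE ext = '.mp3' ORDER BY name").fetchall()
print([s[0] for s in songs])
```

Real indexers (Spotlight, Tracker, Everything) are essentially this plus change notifications and metadata extraction; the filesystem stays the source of truth.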

On top of this, the trade-off of creating an entirely new OS is enormous. Sure, you can make an OS with no apps because it's not compatible with anything that's been created before, and then you can add your own editor and your own web browser and whatever. And people who only need those things will love it. But if you need something that the OS developer didn't implement, you're screwed. You want to play a game? Sorry. You want to run the software that your school or business requires? Sorry. Seriously, don't throw out every damn thing ever made just to make a better suite of default apps.

ZenPsycho 1 hour ago 0 replies      
This runs parallel to a lot of my thoughts. One thing that you don't quite address, and which I believe has derailed all efforts to do stuff like this, is the challenge of getting a large group of developers to agree on a single set of data formats. It is only once you nail that that many of the composition/copy/paste things become possible. Some of these formats are easy: JPEG, PNG, UTF-8. When it comes to something like the metadata schema for a song, or a recipe, that's a can of worms and flamewars.

To some extent you've got the DBFS thing that everything shares, but that's only of use for sharing insofar as you can get easy agreement about what field names should be available for a kind of thing.

You've also got security concerns. If everything shares the same database, any random bit of code can ship that data off to a Russian data-mining op. Or corrupt your song database. Or encrypt everything and ransom it. You kind of address this by putting a layer of indirection here and having security and access managed via the message bus, but this needs a UI, and I don't think Apple, Android, or Facebook has really mastered the UI for permissions.

casebash 1 hour ago 0 replies      
I wouldn't say that innovation on the desktop is dead, but most of it seems to be driven by features or design patterns copied from mobile or tablet. Take, for example, Windows 8 and Windows 10: Windows 8 was all about moving to an OS that could run on a whole host of devices, while Windows 10 was all about fixing the errors made in that transition.
joshmarinacci 10 hours ago 0 replies      
OP here. I wasn't quite ready to share this with the world yet, but what are you gonna do.

I'm happy to answer your questions.

vbezhenar 10 hours ago 1 reply      
I think that the next reboot will be unifying RAM and disk, with tremendous amounts of memory (terabytes) for apps and transparent offloading of huge video and audio files to the cloud. You don't need a filesystem or any persistence layer anymore; all your data structures are persistent. Use immutable structures and you have unlimited undo for the entire life of the device. Reboots no longer make sense; all you need is to flush processor registers before turning off. This would require rewriting the OS from the ground up, but it would allow a completely new user experience.
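The "immutable structures give you unlimited undo" claim is easy to see in miniature. A hedged sketch (not how a persistent-memory OS would actually store things, just the principle):

```python
# Every edit produces a new immutable snapshot; "the document" is
# just a pointer into the history, so undo is indexing, not deletion.
history = [()]  # start with an empty immutable document

def edit(history, item):
    # Tuples are immutable: concatenation builds a new snapshot
    # and leaves every previous version intact.
    history.append(history[-1] + (item,))

edit(history, "draft")
edit(history, "typo")
edit(history, "fix")

current = history[-1]   # ('draft', 'typo', 'fix')
undone = history[-2]    # ('draft', 'typo') -- nothing was destroyed
print(current, undone)
```

Real systems use structural sharing (as in persistent tree data structures) so each snapshot costs far less than a full copy.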
lou1306 11 hours ago 4 replies      
Windows 10 didn't add any UX feature? What about Task View (Win+Tab) and virtual desktops?

And why bash the Linux subsystem, which surely isn't even developed by the UX team (so no wasted resources) and is a much-needed feature for developers?

BTW, there is a really simple reason why mainstream OSs have a rather conservative design: the vast majority of people just don't care, and may even get angry when you change the interaction flow. Many of the ideas presented in the post are either developer-oriented or require significant training to use proficiently.

nebulous1 7 hours ago 0 replies      
I much preferred the second half of this to the first half.

However, both seemed to end up with the same fundamental flaw: he's either underestimating or understating how absurdly difficult most of what he's suggesting is. It's all well and good saying that we can have a standardized system for email, with everything being passed over messages, but what about everything else? It's extremely difficult to standardize an opinionated system that works for everything, which is exactly why so many operating system constructs are more general than specific. For this to all hang together you would have to standardize everything, which will undoubtedly turn into an insane bureaucratic mess. Not to mention that a lot of software makers actively fight against having their internal formats open.

mherrmann 9 hours ago 1 reply      
What I hate is the _bloat_. Why is GarageBand forced upon me with macOS? Or iTunes? Similarly for video players etc on all the other OSs. I am perfectly capable of installing the software I need, thank you very much.
thibran 5 hours ago 0 replies      
Interesting to read someone else's ideas about this topic, which I've thought about quite a lot myself. The basic building block of a better desktop OS is, IMHO and as the OP wrote, a communication contract between capabilities and the glue (a.k.a. apps). I don't think we would need that many capability-services to build something useful (it doesn't even need to be efficient at first). For a start it might be enough to wrap existing tools, expose them, and see whether things work or not.

Maybe start by building command-line apps and see how well the idea works (cross-platform would be nice). I guess the resulting system would have some similarities with RxJava, which lets you compose things together (asynchronously get A & B, then build C and send it to D if it does not contain Foo).
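That composition (fetch A and B concurrently, combine into C, forward C only if it lacks Foo) can be sketched with async primitives. All service names below are invented stand-ins:

```python
import asyncio

# Sketch of the composition style described above. fetch() and
# send_to_d() are hypothetical capability-services, not real APIs.

async def fetch(name):
    await asyncio.sleep(0)   # stand-in for real network I/O
    return f"data-from-{name}"

sent = []

async def send_to_d(c):
    sent.append(c)           # pretend delivery to service D

async def pipeline():
    a, b = await asyncio.gather(fetch("A"), fetch("B"))  # A & B in parallel
    c = f"{a}+{b}"                                       # build C
    if "Foo" not in c:                                   # filter
        await send_to_d(c)

asyncio.run(pipeline())
print(sent)
```

The point of the contract idea is that each stage only sees data, never where it came from, so any stage could be swapped for a cloud or cached implementation.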

If an app talked to a data-service it would no longer have to know where the data is coming from or how it got there. This would allow building a whole new kind of abstraction, e.g. data could be stored in the cloud and only downloaded to a local cache when frequently used, then later synced back to the cloud transparently (maybe even ahead of time, because a local AI learned your usage patterns). I know that you can have such sync things today; they are just complicated to set up, or cost a lot of money, or work only for specific things/applications, and they are often not accessible to normal users.

Knowing how to interact with the command-line gives advanced users superpowers. I think it is time to give those superpowers to normal users too. And no, learning how to use the command-line is not the way to go ;-)

A capability-services-based OS could even come with a quite interesting monetization strategy: selling extra capabilities, like storage, async computation, or AI services, besides selling applications.

jonahss 9 hours ago 1 reply      
The author mentions they wished Object-based streams/terminals existed. This is the premise of Windows Powershell, which today reminds me of nearly abandoned malls found in the Midwest: full of dreams from a decade ago, but today an empty shell lacking true utility, open to the public for wandering around.
ksec 11 hours ago 3 replies      
I hate to say this, but an ideal Desktop OS, at least for majority of consumers is mostly here, and it is iOS 11.

Having used the newest iPad Pro 10.5 (along with the iOS 11 beta), the first few hours were pure joy; after that, frustration and anger came flooding in. Because what I realized is that this tiny little tablet, costing only half a MacBook Pro or even iMac, limited by a fanless design with lower TDP, 4GB of memory, no dedicated GPU, and likely a much slower SSD, provides a MUCH better user experience than any Mac or Windows PC I have ever used, including the latest MacBook Pro.

Everything is fast and buttery smooth; even the web browsing experience is better. The only downside is that you are limited to the touch screen and keyboard. A number of times I have wondered if I could attach a separate monitor and use it like the Samsung desktop dock.

There is far too much backward compatibility to care for with both Windows and Mac. And this is similar to the discussion in the previous Software off Rails thread: people are less likely to spend time optimizing when things work well enough out of the box.

agumonkey 11 hours ago 1 reply      
I see https://birdhouse.org/beos/refugee/trackerbase.gif for 2 seconds and I feel happy. So cute, clear, useful.
st3fan 9 hours ago 1 reply      
> And if you wanted to modify your email client, or at least the one above (Mail.app, the default client for Mac), there is no clean way to extend it. There are no plugins. There is no extension API. This is the result of many layers of cruft and bloat.

I am going to say that it is probably a product decision in case of Mail.app.

Whether Mail.app is a big steaming pile of cruft and bloat inside, nobody knows, since it is closed source.

gshrikant 8 hours ago 1 reply      
While I'm not sure I agree with everything in the article, it does mention a point I've been thinking about for a while - configuration.

I really do think applications should try to zero in on a few standard configuration file formats. I don't have a strong preference for any one of them (although avoiding XML would be nice). It makes the system uniform and makes it easier to move between applications. Of course, applications can add extended sections to suit their needs.

Another related point is the location of configuration files. Standard Linux/Unix has a nice hierarchy (/etc, /usr/local/etc, and others for more specific configuration; I'm sure Windows and OS X have similar hierarchies too), but different applications still end up placing their configuration files in unintuitive places.

I find this lack of uniformity disturbing, especially because it looks so easy (at least on the surface) to fix, and the benefits would be nice: easier to learn, and scriptable.

A last unrelated point - I don't see why Linux distributions cannot standardize around a common repository. Debian and Ubuntu both share several packages but are still forced to maintain separate package databases, and you can't easily mix and match packages between them. This replication of effort seems more ideological than pragmatic (of course, there probably are some practical reasons too). But I can't see why we can't all pool resources and share a common 'universal' application repository - maybe divided into granular sections like 'Free', 'Non-Free', and 'Contrib/AUR' so users have full freedom to choose the packages they want.

Like other things, I think these ideas have been implemented before but I'm a little disappointed these haven't made it into 'mainstream' OS userlands yet.

bastijn 7 hours ago 1 reply      
Apart from the content: can I just express my absolute love for (longer) articles that start with a tl;dr?

It gives an immediate answer to "do I need to read this?", and if so, what key arguments should I pay attention to?

Let me finish with expressing my thanks to the author for including a tl;dr.


saagarjha 4 hours ago 0 replies      
> if you wanted to modify your email client, or at least the one above (Mail.app, the default client for Mac), there is no clean way to extend it. There are no plugins.

Mail.app supports plugins.

> Why can't I have a file in two places at once on my filesystem?

So... a hardlink?

> Why don't my native apps do that?

Dynamic text lets you do this, but it's mobile-only currently.

> have started deprecating the Applescript bindings which make it work underneath

Since when?

oconnor663 8 hours ago 1 reply      
> Wayland is supposed to fix everything, but it's been almost a decade in development and still isn't ready for prime time.

Mutter's Wayland implementation is the default display server for GNOME Shell right now. How much more prime time can you get?

doggydogs94 2 hours ago 0 replies      
FYI, most of the author's complaints about the command line were addressed by Microsoft in PowerShell. For example, PowerShell pipes objects, not text.
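The object-pipe idea isn't PowerShell-specific; the same pattern can be sketched with generators in any language. A toy analogy in Python (the process data is made up, and `where` loosely mimics PowerShell's `Where-Object`):

```python
from collections import namedtuple

# Stages pass structured records, not strings, so downstream stages
# never have to re-parse text the way classic Unix pipes do.
Proc = namedtuple("Proc", "name cpu")

def processes():                      # producer stage (fake data)
    yield Proc("init", 0.1)
    yield Proc("browser", 42.0)
    yield Proc("editor", 7.5)

def where(pred, stream):              # filter stage
    return (p for p in stream if pred(p))

hogs = [p.name for p in where(lambda p: p.cpu > 5, processes())]
print(hogs)
```

In PowerShell the equivalent one-liner pipes real objects between cmdlets; the win in both cases is that `cpu` stays a number end to end.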
atemerev 9 hours ago 0 replies      
"A solution in search of a problem".

What problem of mine is "piping my Skype stream to a video analysis service" supposed to solve? Why would I want to dock and undock different application parts to all the places they don't belong? Etc.

blueworks 10 hours ago 0 replies      
Attributing Atom's performance to the underlying Electron and Node.js runtime is inappropriate, since another popular editor, Microsoft's VS Code, also uses Electron but is very fast and a pleasure to work with.
pier25 10 hours ago 0 replies      
I agree with some of the points stated. For years I've been thinking that a tag based file system would be superior to a folder based one in many aspects.

macOS has tags, but the UX/UI for interacting with them is really poor.

sddfd 9 hours ago 2 replies      
I think electron is a step into the right direction.

Let's assume for a moment there weren't a problem with JavaScript performance (because, for example, WebAssembly can replace it).

Then Electron is a platform everyone can build their applications on. And once that happens, operating systems are free to shed the library cruft.

This is just one possible migration path, and I am not saying it's going to happen or that it is even a good idea.

But if you have to write cross-platform apps, this seems to have clear advantages.

PrimHelios 10 hours ago 2 replies      
This seems to me to be written by someone who uses macOS almost exclusively, but has touched Windows just enough to think they understand it. The complete lack of understanding of IPC, filesystems, scripting, and other OS fundamentals is pretty painful.

>Why can I dock and undock tabs in my web browser or in my file manager, but I can't dock a tab between the two apps? There is no technical reason why this shouldn't be possible. Application windows are just bitmaps at the end of the day, but the OS guys haven't built it because it's not a priority.

I'm an idiot when it comes to operating systems (and sometimes even in general), but even I know why there are issues with that. You need a standardized form of IPC between the two apps, which wouldn't happen because both devs would be convinced their way is the best. On top of that, it's a great way to get an antitrust case brought against you if you aren't careful [0]

>Why can't I have a file in two places at once on my filesystem? Why is it fundamentally hierarchical?

Soft/hard links, fam. Even Windows has them.

>Why can['t] I sort by tags and metadata?

You can in Linux, you just need to know a few commands first.

>Any web app can be zoomed. I can just hit command + and the text grows bigger. Everything inside the window automatically rescales to adapt. Why don't my native apps do that? Why can't I have one window big and another small? Or even scale them automatically as I move between the windows? All of these things are trivial to do with a compositing window manager, which has been commonplace for well over a decade.

Decent point IMO. There's a lot of native UI I have a hard time reading because it's so small. That said, I think bringing in the ability to zoom native widgets would bring in a lot of issues that HTML apps have.

>We should start by getting rid of things that don't work very well.

The author doesn't understand PCs. The entire point of these machines is backwards-compatibility, because we need backwards compatibility. I'm sitting next to a custom gaming PC and I have an actual serial port PCIe card because I need serial ports. Serial ports. In 2017. I'd be screwed if serial wasn't supported anymore.

I won't touch the rest of the article because there's a lot I disagree with, but he seems to just want to completely reinvent the "modern OS" as just Chromebooks.

[0]: https://en.wikipedia.org/wiki/United_States_v._Microsoft_Cor....

al2o3cr 9 hours ago 0 replies      

 Window Managers on traditional desktops are not context or content aware, and they are not controlable by other programs.
My copy of Divvy is confused by this statement. :)

linguae 10 hours ago 2 replies      
I've been thinking a lot about the problem of modern desktop operating systems myself over the past year. I believe that desktop operating system environments peaked last decade. The Mac's high water mark was Snow Leopard, the Linux desktop appeared to have gained momentum with the increasing refinement of GNOME 2 during the latter half of the 2000's, and for me the finest Windows releases were Windows 2000 and Windows 7. Unfortunately both the Linux desktop and Windows took a step in the wrong direction when smartphones and tablets became popular and the maintainers of those desktops believed that the desktop environments should resemble the environments of these new mobile devices. This led to regressions such as early GNOME 3 and Windows 8. GNOME 3 has improved over the years and Windows 10 is an improvement over Windows 8, but GNOME 2 and Windows 7, in my opinion, are still better than their latest successors. Apple thankfully didn't follow the footsteps of GNOME and Windows, but I feel that the Mac has stagnated since Snow Leopard.

I agree with the author of this article that desktop operating systems should develop into workstation operating systems. They should be able to facilitate our workflows, and ideally they should be programmable (which I have some more thoughts about in my next paragraph). In my opinion the interface should fully embrace the fact that it is a workstation and not a passive media consumption device. It should, in my opinion, be a "back to basics" one, something like the classic Windows 95 interface or the Platinum Mac OS interface.

One of the thoughts that I've been thinking about over the years is the lack of programmability in contemporary desktop GUIs. The environments of MS-DOS and early home computers highly encouraged users to write programs and scripts to enhance their work environment. Unix goes a step further with the idea of pipes in order to connect different tools together. Finally, the ultimate form of programmability and interaction would resemble the Smalltalk environment, where objects could send messages to each other. What would be amazing would be some sort of Smalltalk-esque GUI environment, where GUI applications could interact with each other using message passing. Unfortunately Apple and Microsoft didn't copy this from Xerox, instead only focusing on the GUI in the early 1980s and then later in the 1980s focusing on providing an object-oriented API for GUI services (this would be realized with NeXTSTEP/OPENSTEP/Cocoa, which inspired failed copycat efforts such as Microsoft Cairo and Apple/IBM Taligent, but later on inspired successful platforms such as the Java API and Microsoft .NET). The result today is largely unprogrammable GUI applications, though there are some workarounds such as AppleScript and Visual Basic for Applications (though it's far from the Smalltalk-esque idea). The article's suggestion for having some sort of standardized JSON application interface would be an improvement over the status quo.
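The "standardized JSON application interface" can be sketched as a tiny message bus. This is a toy, in-process illustration only; every name in it is invented, and a real system would put the serialized boundary across processes:

```python
import json

# Toy message bus: apps register handlers for message types and talk
# only in JSON, never through each other's internals.
handlers = {}

def register(msg_type, fn):
    handlers.setdefault(msg_type, []).append(fn)

def send(msg_type, payload):
    # Round-trip through JSON to model the wire format between apps.
    wire = json.dumps({"type": msg_type, "payload": payload})
    msg = json.loads(wire)
    return [fn(msg["payload"]) for fn in handlers.get(msg["type"], [])]

# A pretend "address book app" exposes a query; a pretend "mail app"
# uses it without knowing anything about its implementation.
register("contacts.lookup",
         lambda p: {"email": f"{p['name'].lower()}@example.com"})

replies = send("contacts.lookup", {"name": "Alice"})
print(replies)
```

This is the Smalltalk message-passing idea at the application level: any app that speaks the schema can answer, which is exactly what makes GUI applications scriptable from outside.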

I would love to work on such an operating system: a programmable GUI influenced by the underpinnings of Smalltalk and Symbolics Genera plus the interface and UI guidelines of the classic Mac OS. The result would be a desktop operating system that is unabashedly for desktop computer users. It would be both easy to use and easy to control.

michaelmrose 5 hours ago 0 replies      
"Window Managers on traditional desktops are not context or content aware, and they are not controlable by other programs."

What does this mean? Some can be via IPC

jonahss 9 hours ago 1 reply      
Look to mobile OSs for innovation in OS design. Like the author stated, that's currently where the money is. It's the closest we have to "starting over", and a lot of things were rethought, such as security and sandboxed apps. IPC is limited for now, but slowly growing.

I wouldn't be surprised if the workstation OSs of the future grew out of our current Mobile OSs

hyperfekt 8 hours ago 0 replies      
This would be neat, but isn't radical enough yet IMHO. If everything on the system is composed of pure functions operating on data, we can supercharge the OS and make everything both possible AND very simple. The whole notion of 'application' is really kind of outmoded.
Zigurd 48 minutes ago 0 replies      
A few years ago I wrote a book about developing big complex networked apps. It had "Enterprise" in the title, based on the idea that mobile device OSs would become dominant - which they did - and that the evolution of tablet devices would continue to where powerful devices like the iPad Pro would overtake the use of Mac and Windows laptops - which they didn't.

Windows and MacOS are full of compromises but are usable. Chrome OS is a contender for users that need a simpler system. What addressable segment is left? You pretty much have to make the case for replacing Windows. But you can only hope to replace the "voluntary" Windows seats. Many Windows users have no choice.

meesterdude 8 hours ago 1 reply      
> I could take a snapshot of a screen. This would store the current state of everything, even my keybindings. I can continue working, but if I want I could rollback to that snapshot.

We can already do this with virtualization (and I make use of it extensively).

jrs95 4 hours ago 0 replies      
I'm not really sure if a system wide document database is an improvement over Core Data or not...
jokoon 8 hours ago 0 replies      
There are really millions of small things that I would make in a new desktop OS.

First would be to forget the whole idea of resizable windows. Windows should only tile automatically. Tabbed interfaces have shown that a simple task bar is enough.

File explorers would have their columns resized automatically... I can't believe both OS X and Windows 10 still get this wrong.

Ultimately I would let applications use hardware directly instead of relying on how the OS does things. This would increase cross-compatibility and developer freedom. Goodbye Qt and all those horrors of the past.

Not to mention the millions of small utilities and functionalities, like WinDirStat and foobar2000, that would be ideal for making the OS a little more useful.

djhworld 6 hours ago 0 replies      
I like the optimism in this post; there are a lot of dismissive comments on here.

However, I just don't think any of the ideas would ever really function. The idea of letting you pipe your Skype video feed to some video analysis tool would never happen.

Similarly, you'd never get application developers to open up their apps in such a way that you can extract/import content.

coldtea 11 hours ago 0 replies      
>In fact, in some cases it's worse. It took tremendous effort to get 3D accelerated Doom to work inside of X windows in the mid 2000s, something that was trivial with mid-1990s Microsoft Windows. Below is a screenshot of Processing running for the first time on a Raspberry Pi with hardware acceleration, just a couple of years ago. And it was possible only thanks to a completely custom X windows video driver. This driver is still experimental and unreleased, five years after the Raspberry Pi shipped.

That's a problem of open source OSes, though, which vendors don't care about, and volunteers aren't numerous enough to match the work needed for everything to play out of the box. Nothing about this particular example has anything to do with OS research or modern OSes being behind.

>Here's another example. Atom is one of the most popular editors today. Developers love it because it has oodles of plugins, but let us consider how it's written. Atom uses Electron, which is essentially an entire web browser married to a NodeJS runtime. That's two Javascript engines bundled up into a single app. Electron apps use browser drawing APIs which delegate to native drawing APIs, which then delegate to the GPU (if you're lucky) for the actual drawing. So many layers.

Again, nothing related to modern OSes being inadequate. One could use e.g. Cocoa and get 10x what Electron offers, for 10x the speed, but it would be limited in portability.

>Even fairly simple apps are pretty complex these days. An email app, like the one above is conceptually simple. It should just be a few database queries, a text editor, and a module that knows how to communicate with IMAP and SMTP servers. Yet writing a new email client is very difficult and consumes many megabytes on disk, so few people do it.

First, I doubt one of the reasons "few people do it" is because it "consumes many megabytes on disk" (what? whatever).

Second, the author vastly underestimates how hard it is to handle protocols like IMAP, or to write a "text editor" that can handle all the subtleties of email (which include almost full-blown HTML rendering). Now, if he means 'people should be able to write an emailer easily iff all constituent parts were available as libraries and widgets', then yeah, duh!

>Mac OS X was once a shining beacon of new features, with every release showing profound progress and invention. Quartz 2D! Expose! System wide device syncing! Widgets! Today, however Apple puts little effort into their desktop operating system besides changing the theme every now and then and increasing hooks to their mobile devices.

Yeah, and writing a whole new FS, a whole new 3D graphics stack, memory compression, seamless cloud file storage, handoff, the move to 64-bit everything, bitcode, and tons of other things besides. Just because they aren't shiny doesn't mean there are no new features there.

>A new filesystem and a new video encoding format. Really, that's it?

Yeah, because a new FS is so trivial -- they should also rewrite the whole kernel at the same time, for extra fun.

>Why can I dock and undock tabs in my web browser or in my file manager, but I can't dock a tab between the two apps? There is no technical reason why this shouldn't be possible. Application windows are just bitmaps at the end of the day, but the OS guys haven't built it because it's not a priority.

There's also no real reason this should be offered, or that it should be a priority. If every possible feature someone might think was "a priority" got built, OSes would be horrible messes.

>Why can't I have a file in two places at once on my filesystem? Why is it fundamentally hierarchical? Why can't I sort by tags and metadata?

Note how you can do all those things in OS X (you can have aliases and symlinks and hard links, can add tags and metadata, and can sort by them). And in Windows I'd presume.

And it's "fundamentally hierarchical" because that's how we think about stuff. But it also offers all kinds of non-hierarchical views: Spotlight- and tag-based views, for one.

>Any web app can be zoomed. I can just hit command + and the text grows bigger. Everything inside the window automatically rescales to adapt. Why don't my native apps do that? Why can't I have one window big and another small? Or even scale them automatically as I move between the windows? All of these things are trivial to do with a compositing window manager, which has been commonplace for well over a decade.

Because bitmap assets. Suddenly all those things are not so "trivial".

There are good arguments to be made about our OSes being held back by legacy cruft (POSIX for one) and new avenues to explore, old stuff that worked better than what we have now, etc.

But TFA is not making them.

osteele 8 hours ago 1 reply      
This could be the first half of a good article. It's a list of things the author cares about that variously: aren't (yet) possible, aren't a good idea, nobody cares enough to make happen, or (maybe) indicate a market force failure.

What would make this interesting (to me) is a discussion not that these features don't exist, but why.

maxekman 7 hours ago 0 replies      
The suggested OS sounds a lot like Plan 9 to me.
zvrba 10 hours ago 1 reply      
This sounds like a rant from a person not really acquainted with operating systems.

> Why can I dock and undock tabs in my web browser or in my file manager, but I can't dock a tab between the two apps?

How would this even be semantically meaningful? What about top-level components like menus which are completely different?

> Why can't I have a file in two places at once on my filesystem?

Umm, soft and hard links do exactly that.
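A minimal sketch of that, using Python's standard library (the filenames here are invented for the example): one file's contents reachable under three names, via a hard link and a symlink.

```python
import os
import tempfile

# Hypothetical demo: one file, three names. A hard link is a second
# directory entry for the same inode; a symlink is a name that resolves
# to another path.
d = tempfile.mkdtemp()
original = os.path.join(d, "notes.txt")
with open(original, "w") as f:
    f.write("same bytes, two places")

hard = os.path.join(d, "notes-hard.txt")
os.link(original, hard)        # hard link: same inode, equal standing

soft = os.path.join(d, "notes-soft.txt")
os.symlink(original, soft)     # soft link: a pointer to the original path

# All three names read identical content.
print(open(hard).read() == open(soft).read() == "same bytes, two places")  # prints True
```

Deleting `original` would leave the hard link readable (the inode survives) but break the symlink, which is the main practical difference between the two.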

> Why can't I speak commands to my computer

Cortana takes a shot at that. Personally, I don't even want to try out the feature until it has a level of comprehension corresponding to a human's. Otherwise, I'll just be guessing how to spell out my sentences/commands.

> or have it watch as I draw signs in the air, or better yet watch as I work to tell me when I'm tired and should take a break.

Because these are hard problems in computer vision, unrelated to operating systems.

> Each application has its own part of the filesystem

Yes, I wouldn't want to give up on that. It's orderly.

> its own config system, and its own preferences, database

Well, Windows unifies this in the registry. It's somewhat unpopular.

> Traditional filesystems are hierarchical, slow to search, and don't natively store all of the metadata we need.

NTFS can store extended metadata + arbitrary data in alternate data streams. Doesn't seem to be used very much.

> I'd like to pipe my Skype call to a video analysis service while I'm chatting, but I can't really run a video stream through awk or sed.

The video stream is a stream of bytes. Skype interprets it and constructs a video from that byte stream. Does he suggest that this interpreter should be part of the kernel? That there is one single video streaming protocol that fits all purposes?

> Native Applications are heavy weight,

Um? I have yet to see a "non-native" application that is as snappy as a native one.

> take a long time to develop and very siloed.

Any application takes a long time to develop. If you care about stability, crash recovery, etc.

> Wouldn't it be easier to build a new email client if the database was already built for you?

Exists, integrated in the Windows OS: https://en.wikipedia.org/wiki/Extensible_Storage_Engine
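To illustrate what a pre-built, shared store buys a frontend author, here is a rough sketch with SQLite standing in for a system-provided database. The schema and names are hypothetical, not ESE's actual API; the point is only that a second frontend needs one query, not a reverse-engineered file format.

```python
import sqlite3

# Hypothetical sketch: a system-provided message store that any email
# frontend could query. Table and column names are invented for the
# example; a real system store would define a shared schema.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE messages (
        id INTEGER PRIMARY KEY,
        sender TEXT,
        subject TEXT,
        received_at TEXT
    )
""")
db.execute(
    "INSERT INTO messages (sender, subject, received_at) VALUES (?, ?, ?)",
    ("alice@example.com", "Hello", "2017-08-20T10:00:00"),
)

# A second, independent frontend reuses the same store with one query.
rows = db.execute("SELECT sender, subject FROM messages").fetchall()
print(rows)  # prints [('alice@example.com', 'Hello')]
```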

> The UI would only be a few lines of code.

It's the logic behind the UI that's complicated, not building the UI itself (heck, you can just draw it if you use C# or VB).

> If you want to make a program that works with the song database you have to reverse engineer iTunes DB format

Even if the hypothetical document DB existed, how would one program know about the schema of other programs? Or schema versioning, or...? The problems with proprietary formats won't just disappear, it'll just become easier to do the wrong thing based on misinterpretation of the other program's schema.

> Message Bus [...] All applications become small modules that communicate through the message bus for everything.

COM, DCOM, CORBA... The first two are made user-friendly on Windows by C#. Don't know whether it's possible to snoop on COM messages, but given the thickness of the documentation on COM, I'd say the answer is "yes".
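For a sense of the pattern being discussed, here is a toy in-process message bus. It is purely illustrative: real systems like COM add interface discovery, permissions, and marshalling across processes, none of which appear here.

```python
from collections import defaultdict

# Toy in-process message bus (illustrative only). Apps register handlers
# for topics; any app -- or a snooping debugger -- can observe what is
# published on a topic it subscribes to.
class Bus:
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._handlers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self._handlers[topic]:
            handler(message)

bus = Bus()
received = []
bus.subscribe("email.new", received.append)    # a 'notifier' app
bus.subscribe("email.new", lambda m: None)     # an 'indexer' app
bus.publish("email.new", {"subject": "Hello"})

print(received)  # prints [{'subject': 'Hello'}]
```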

> However, this also means we have to rebuild everything from scratch.

Yes. Windows already exposes an insane amount of helper objects as COM components.

> You could build a new email frontend in an afternoon...

In which alternate universe?

> I really like the commandline as an interface sometimes, it's the pure text nature that bothers me. Instead of chaining CLI apps together with text streams we need something richer, like serialized object streams (think JSON but more efficient).

He should read up on Powershell. It's also extensible and can directly invoke COM components (+ all of the .net framework).
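PowerShell pipes .NET objects rather than raw text; the same idea can be sketched language-neutrally with one serialized object per line. This is a hypothetical two-stage pipeline with made-up records, not PowerShell itself.

```python
import io
import json

# Upstream stage emits one JSON object per line instead of raw text
# (the filenames are invented for the example).
upstream = io.StringIO(
    '{"name": "notes.txt", "size": 120}\n'
    '{"name": "big.iso", "size": 4096}\n'
)

# Downstream stage filters on a field -- no fragile text parsing,
# no column counting, just structured records.
large = []
for line in upstream:
    record = json.loads(line)
    if record["size"] > 1000:
        large.append(record["name"])

print(large)  # prints ['big.iso']
```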

> System Side Semantic Keybindings

That one may be original. I think KDE has something like this.

> The clipboard should be visible on screen as some sort of a shelf that shows the recent items I've copied.

IIRC, I've seen something like this in KDE. Earlier versions of Windows had some "clipboard manager" too, though it seems to have disappeared in new versions. Plenty of freeware ones though.

> In the new system all applications are tiny isolated things which only know what the system tells them.

That's how Windows UWP applications behave. App Store ones too. IIRC, some old, then-mainstream OSes tried that kind of separation and it didn't work well with users. Sometimes you want to share data between isolation domains.

> None of this is New

No, and it seems that, feature-wise, Windows is closest to his dream OS. Now he just needs to convince programmers to use the features that are already there :-)

nkristoffersen 7 hours ago 0 replies      
Sounds like he should be using iOS honestly.
d4r114 9 hours ago 0 replies      
PJON could fit as the message bus the author describes
jackcosgrove 8 hours ago 1 reply      
How does this new OS handle backwards compatibility?

I've always thought the next evolution of the OS was to be a hypervisor for application containers that can communicate via a common message bus.

tomc1985 6 hours ago 0 replies      
I'm getting to the point where Medium articles with stock imagery are instantly ignored.
Xorlev 8 hours ago 0 replies      
I believe I understand the vision that the author is trying to paint. I don't think he's alone, but the reality is that building a full OS is a pretty massive undertaking. Additionally, his idea of simplicity may be complexity for others. I want to explore this a bit from a point of optimism, because it's very, very easy to find flaws in a manifesto that desires to redesign an operating system.

There's a lot of interesting experimentation in OS-land. BeOS and its successor, Haiku [1], are called out explicitly by the author. BeOS/Haiku use this idea of apps as modules to expose functionality across a message bus. Redox OS (a Rust OS) [2] is built on the microkernel concept. These are both kind of on the fringe at the moment, so let me bring up one more platform that many of us use daily: the modern web browser.

Chrome (and Firefox, ChromeOS, etc.) actually do take many of these concepts to heart. Now, I know more about Chrome than Firefox or ChromeOS, so let me set those aside for a moment.

- "Everything done via a message bus." This is Chrome extensions in a nutshell.
- "Dockable tabs in any window."
- "A CLI with structured data." Sorta: the Chrome debugger + JS. With some effort, this could be a lot more powerful. The author's desire to pipe a video call to an analysis service is a fairly tough requirement here and obviously wouldn't fly in Chrome either, but that isn't to say that it'd be impossible.
- "A built-in document database." (IndexedDB)
- "Working sets." Chrome profiles -- try them!
- "Apps become Modules." This is more of a miss, but if you squint enough through a powerful enough lens, the APIs exposed by Chrome to extensions/webpages are a lot like this. That said, given that everything on Chrome is more site-centric vs. computer-centric, things are namespaced vs. Spotify being able to execute arbitrary queries for MP3s.

Now, I'm not going to say that Chrome is IdealOS. There is much from that vision that's missing. And I'd also say that webapps just aren't always an acceptable substitute for native applications. Through massive wastes of computing power, we are getting closer (see: Slack, Atom, all things Electron). We aren't there yet. I'll always take a native app if it's written decently.

It seems to me like in general much of this vision is being expressed through disparate efforts, but only a few are tackling the idea of replacing the full OS. Chrome seems best poised in many ways because it's already on your existing OS. Yes, it's having to use the underlying OS' APIs and such, but you can argue it's just one more layer. ChromeOS seems to do a pretty good job of eliminating even that.

In general, I'm excited to see discussion on operating systems. The OSes used by the general public are already here: Android and iOS. It's up to us to build a better future for those of us using workstations.

Disclaimer: I do not work on Chrome, but do work for its parent organization. My views in no way reflect that of my employer's.

[1] https://en.wikipedia.org/wiki/Haiku_(operating_system)
[2] https://en.wikipedia.org/wiki/Redox_(operating_system)

kyberias 6 hours ago 0 replies      
So much incorrect stuff in the text, I stopped reading.
nickpsecurity 11 hours ago 3 replies      
The author keeps questioning why certain siloing, like the App Store, happens, and then offers technical solutions that won't work. The reason is that the siloing is intentional on the part of the companies developing those applications, to reduce competition and boost profits. They'd rather provide the feature you desire themselves, or through an app they get a 30% commission on.

A lot of other things the author talks about keep the ecosystems going. The ecosystems, esp. key apps, are why many people use these desktop OSes. Those apps and ecosystems take too much labor to rebuild from a clean slate. So, the new OSes tend not to have them at all, or use knock-offs that don't work well enough. Users think they're useless and leave after the demo.

The market effects usually stomp technical criteria. That's why author's recommendations will fail as a whole. "Worse Really is Better" per Richard Gabriel.

Ezhik 6 hours ago 0 replies      
I was throwing out hypotheticals with a couple of friends a few days back. One problem that I felt things like Samsung DeX and Windows Continuum were trying to solve was the fact that all your devices are ultimately separate computers.

Your currently open apps, configuration, even things like your wallpaper are still ultimately different across your devices. Each device has its own state, and while with things like cloud file syncing, Pushbullet, etc. you can make your devices at the very least aware of each other, in the end they still have separate states.

The endgame would be to just have a single state, period. Your computer would be every device you have. You would be able to drag a window from your phone to your desktop to your HoloLens. Every file you have in your life is always with you.

But that's the faraway future.

Something possible with today's hardware (but not software), however, would be to have phones with smart docks. Instead of just being hubs to connect the phone to a screen and a keyboard, they would also have processing power, and be proper computers in their own right to which the phone would be able to offload complex computations. But I'm thinking it should be less like an external GPU dock, and more like a server for remote compilation or video rendering. This way, for example, you'd even be able to do things like starting to render a video while your phone is docked, then undocking while the video is still rendering on the dock, or you could launch a game that runs on the dock and is controlled from your phone - something like AirPlay, but the processing takes place on the dock. So ultimately, while you still have multiple computers, there is only a single state, which is on your phone.

The software is the hard part here. We can build a smartphone and a smart dock, and have a fast enough data protocol to transfer content between each other through USB-C. But who will write the OS? Where do you get the apps? Why would Adobe bother porting After Effects to run on a phone of all things, and then also restructure it to be aware of the whole smart dock concept, when After Effects can do something like this today as it is? Why would game developers bother writing their games in a way that specifically supports this dock paradigm when they can get the same general idea on the Nintendo Switch for free? And so on.

This, just like OP's idea, would take a reboot. The problem with a reboot is that it's a reboot. You cannot do that. Microsoft cannot do it, which is why Windows 10 still runs very old software. Apple can't do it, which is why Carbon was a thing. Linux can't do that, because Red Hat and Canonical will not throw their customers under a bus.

But still, it's fun to daydream. Being told to stop even imagining the impossible is not exactly going to help innovation.

pvdebbe 10 hours ago 0 replies      
Complecting a GUI into an OS doesn't sound very ideal to me.
ageofwant 9 hours ago 1 reply      
Most of what the author craves can be cobbled together from existing components. On Linux, at least. If you don't use Linux, you have bigger issues to deal with first.

He can start by using a tiling window manager, like i3.

"It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so."

romanovcode 11 hours ago 4 replies      
> In the screenshot below the user is lifting the edge of a window up to see what's underneath. That's super cool!

> https://joshondesign.com//images2/lift-window.png

Is this sarcasm? Because it is complete garbage.

Opioid makers made payments to one in 12 U.S. doctors brown.edu
261 points by metheus  2 days ago   98 comments top 17
lr4444lr 2 days ago 2 replies      
Maybe it's because Americans just have this cognitive dissonance that their trusted doctor could be anything less than 100% conscientious about their health, but we need to plainly face the fact: if members of the press were able to write exposés about drug makers fudging the data on the addictiveness and effectiveness of their products, then doctors, with their medical training and responsibility over actual people's lives, should have proceeded with more caution and not mindlessly written scripts to get rid of every tiny pain just because patients kept asking for something. It's just unconscionable.

EDIT: this survey was also very damning: http://www.chicagotribune.com/news/local/breaking/ct-prescri...

elipsey 2 days ago 3 replies      
Reminds me of what Rostand said about murder: "Kill one man, and you are a murderer. Kill millions of men, and you are a conqueror. Kill them all, and you are a god."

Sell one OxyContin and you're a drug dealer; sell a million and you're a C-level.

lootsauce 2 days ago 0 replies      
I have two relatives who died from prescription opioid addiction and abuse, and I don't think a few payments here and there are what motivates doctors to prescribe these drugs at a higher rate. Maybe they do, maybe not. The fact is they are powerful drugs that can stop pain AND they make LOTS of money, so they get pushed as the best option.

The thing that is in question in a doctor's mind is: can I say this is the best option? That's what the face time with reps, meals, conferences, etc. are doing: giving the MD a perception that this is best practice. It's the professional cover to prescribe what everyone knows is a highly addictive and dangerous narcotic.

If the same kind of money were spent on informing, reminding, and reminding again (face time with addiction-prevention advocates, conferences on the opioid epidemic, payments for speaking on alternatives to opioids for pain treatment, giving doctors the facts about these drugs: the addiction and death rates, the impact on families and communities of the inevitable proportion of people who will become addicted and of those who will die), it would be much, much harder to say this is a best practice.

But even then, doctors are pushed hard to deal with as many patients as possible. A quick answer that deals with the immediate problem is what the patient wants, and it's all the doc has the time and system support to give. This situation lends itself to those who truly benefit, the makers of these drugs, taking advantage and pushing drugs they know will make people addicted, leading to higher use and profits. Lost lives and destroyed families be damned.

ransom1538 2 days ago 2 replies      
Feel free to browse doctors' opioid counts here. I was able to match them to their actual profiles. Take into account their field, but, even with that the numbers are ridiculous. If you are in "Family Practice" and prescribe opioids 9167 times per year you probably have a very sore hand.


ams6110 2 days ago 3 replies      
"the average payment to physicians was $15, the top 1 percent of physicians reported receiving more than $2,600 annually in payments"

Neither is enough to sway most physicians IMO. This seems to me like trying to stir up a scandal where there really isn't one.

I did hear on the radio today that 90% of prescription opiates are sold in the USA and Canada, with the bulk of that being in the USA. Other countries treat pain more holistically.

gayprogrammer 2 days ago 0 replies      
>> Q: What connection might there be between drug-maker payments to physicians and the current opioid use epidemic?

The article is pure speculation. They did not correlate the payments made to doctors with the prescriptions those doctors made, nor even more broadly with national prescription rates.

This article just makes the implied assumption that doctors push pills onto patients. I don't discount that at one time doctors may have been incentivized to play it fast and loose with pain pills, but those days are LONG gone now.

I would like to see research on the population in terms of predisposition to addiction and susceptibility to chemical dependence.

11thEarlOfMar 2 days ago 1 reply      
I don't like the 'pigs at the trough' image of this type of report. There are almost certainly pigs, but there is much more to resolving it than just revoking some licenses or throwing some people in jail.

Standard practice in business of all types is to take clients out for a meal to talk business. Usually, the meal setting enables a different type of legitimate, sober interaction. Many types of business are conducted this way. Some companies have policies that limit the value of what a salesperson can share with a client, for example, Applied Materials limits the value of any type of entertainment by a vendor to $100. This is good corporate policy to inhibit undue influence by vendors.

But it is not 'a payment'.

Likewise, it is pretty easy to see that pharma would want a Dr. who is prescribing their medication and has a positive story to tell to speak at one of their seminars. The Dr. might say that his time is worth $x, and the Pharma needs to cover his travel expenses, and then he'd consent to presenting. In this case, any fees paid would be considered payment. The question is, how much is being paid and does that payment present undue influence. Many doctors are independent contractors and can choose to do this type of activity without a policy to override or limit the value of it. On the other hand, state medical boards which license physicians should have policies that limit all medical and pharmaceutical companies in how they can influence physicians.

liveoneggs 2 days ago 3 replies      
jasonkostempski 1 day ago 0 replies      
Are there any rules that if a doctor has such a deal, it must be clearly expressed to the patient, verbally and in writing? I think that would help not only deter doctors from making the deal, at the risk of being viewed as untrustworthy, but also help people who blindly trust their doctor to maybe think twice before accepting their solution. I don't think there's a fix for the patients who just want the drug, and as long as they're informed, consenting adults, it should be their prerogative.
esm 1 day ago 1 reply      
Payments may affect prescribing, but I think that system factors count for more than many people realize. By way of an example, imagine the following case, which is reasonably common at the outpatient medicine office I am rotating through:

A 46 yo M with diabetes, hypertension, a 30 pack year smoking history, and low back pain that has been treated with oxycodone ever since a failed back operation 1.5 years ago presents to your office for routine follow-up. It's 10am, the hospital allots 15 minutes for routine appointments, and your next patient is in the waiting room. You are his physician -- what do you prioritize?

Smoking, diabetes, and hypertension are a perfect storm for a heart attack in the next 10 years, so how much time do you want to spend optimizing antihypertensive meds and glucose control? You could talk to him about quitting smoking, which is pretty high-yield since it would lower his cardiovascular and cancer risk. On the other hand, he doesn't seem particularly motivated to quit right now.

You would like to see him exercise more and eat better, since his blood sugars are not too bad yet, and you might be able to spare him daily insulin injections. But, his back pain is so bad that walking is difficult and exercise is out of the question. Tylenol and ibuprofen only "take the edge off". Oxycodone is the one thing that seems to really help. He asks you to refill his prescription, especially because "the pain is so bad at night, I can't sleep without it".

His quality-of-life is already poor, and it would become miserable if you took away his opioid script without providing some other form of pain control. You believe that he might benefit from physical therapy and time. He is willing to try PT, but he is adamant that he will not be able to "do all of the stretches and stuff" without taking oxycodone beforehand.

You now have 7 minutes to come up with a plan he agrees on (you're there to help him, after all), put in your orders, and read up on the next patient. How do you want to allocate your time? What if you suggest cutting down on his oxycodone regimen and he pushes back?

I don't know if there is a good answer. But these situations happen all the time, and someone has to make a decision. Most doctors are normal people. The different backgrounds, personalities, willingness to engage in confrontation or teaching, and varying degrees of concern for public health vs. individual patient needs, etc. lead to a variety of approaches. In the end, I think that pharma payments have a marginal effect on most doctors who have families, bosses, insurance constraints, a full waiting room, and are faced with the patient above.

refurb 2 days ago 8 replies      
This should be kept in context. Let's say the manufacturer presented new data at a conference. During that presentation they provided lunch and refreshments. Every one of those doctors who attended will now show up in the CMS database.

Do we think that a $15 lunch is going to influence a physician to over-prescribe a drug?

robmiller 2 days ago 1 reply      
There is an irony here that the US invaded Afghanistan, the world's largest opium exporter[1].

[1] https://en.wikipedia.org/wiki/Opium_production_in_Afghanista...

ddebernardy 2 days ago 1 reply      
Is this really news? John Oliver ran a piece on the topic and the industry's many other dubious practices over 2 years ago, and I'm quite sure he wasn't the first to try to raise awareness.


vkou 2 days ago 2 replies      
Not related to payments, but related to opioids:

My father broke his thumb a few weeks ago, while operating a woodchipper. After getting a cast, he went to see a specialist, who recommended that K-wires be surgically installed - small metal rods that go into his thumb, until it heals, at which point they will be pulled out.

He got local anesthetic, got the wires installed, and got sent home. Because he lives in Canada, they gave him nothing for the pain. Two days later, the pain died down, and he's now waiting for the bones to heal.

In America, I can't imagine that doctor would get many positive reviews from his patients, for not prescribing painkillers. Market forces would push him towards over-prescribing... And statistically, some of his patients will become addicted.

zeep 2 days ago 0 replies      
And they tell them that their patients suffer from "pseudo-addiction" and should get more of the drugs...
CodeWriter23 2 days ago 0 replies      
If it walks like a marketing program and quacks like a marketing program, guess what...
oleg123 2 days ago 1 reply      
bribes - or payments?
Docker Is Raising Funding at $1.3B Valuation bloomberg.com
317 points by moritzplassnig  2 days ago   273 comments top 19
bane 2 days ago 5 replies      
I feel like this is one of those valuations which makes sense contextually, but not based on any sort of business reality.

Docker reminds me a lot of the PKZIP utilities. For those who don't remember, back in the late 80s the PKZIP utilities became a kind of de facto standard on non-Unixes for file compression and decompression. The creator of the utilities was a guy named Phil Katz, who meant to make money off of the tools but, as was the fashion at the time, released them as basically feature-complete shareware.

Some people did register, and quite a few companies registered to maintain compliance so PKWare (the company) did make a bit of money, but most people didn't bother. Eventually the core functionality was simply built into modern Operating Systems and various compatible clones were released for everything under the sun.

Amazingly the company is still around (and even selling PKZIP!) https://www.pkware.com/pkzip

Katz turned out to be a tragic figure http://www.bbsdocumentary.com/library/CONTROVERSY/LAWSUITS/S...

But my point is, I know of many, many (MANY) people using Docker in development and deployment, and I know of nobody at all who's paying them money. I'm sure such customers exist; they make revenue from somewhere, presumably. But they're basically just critical infrastructure at this point, becoming an expected part of the OS, not a company.

new299 2 days ago 12 replies      
I'm so curious to understand how you pitch Docker at a 1.3B USD valuation, with, I assume, a potential valuation of ~10B USD needed to give the investors a decent exit.

Does anyone have an insight into this?

Looks like Github's last valuation was at 2BUSD. That also seems high, but I can understand this somewhat better as they have revenue, and seem to be much more widely used/accepted than Docker. In addition to that I can see how Github's social features are valuable, and how they might grow into other markets. I don't see this for Docker...

locusofself 2 days ago 2 replies      
I used docker for a while last year and attended Dockercon. I was really excited about it and thought it was going to solve many of my problems.

But with how complicated my stack is, it ultimately just didn't make sense to use. I loved the idea of it, but in the end good old virtual machines and configuration management can do most of the same stuff.

I guess if you want to pack your servers to the brim with processes and shave off whatever performance hit you get from KVM or XEN, I get it.

But the idea of the filesystem layers and immutable images just kind of turned into a nightmare for me when I asked myself "how the hell am I going to update/patch this thing?"

Maybe I'm crazy, but after a lot of excitement it seemed more like an extra layer of tools to deal with, more than anything.

foota 2 days ago 3 replies      
My first reaction was that I was surprised it wasn't higher.

My second reaction was incredulity at how ridiculous my first reaction was.

raiyu 2 days ago 0 replies      
Monetizing open source directly is a bit challenging because you end up stuck in the same service model as everyone else. Which is basically to sell various support contracts to the fortune 100-500.

Forking a project into an enterprise (paid-for) version and limiting those features in the original open source version creates tension in the community, and usually isn't a model that leads to success.

Converting an open source project directly into paid-for software or a SaaS model is definitely the best route, as it reduces head count and allows you to be a software company instead of a service company.

Perhaps best captured by Github wrapping git with an interface and community, then directly selling a SaaS subscription and eventually an enterprise hosted version that is still delivered on a subscription basis, just behind the corporate firewall.

Also of note is that Github didn't create git itself; it was instead built from a direct need that developers saw themselves. That means they thought "what is the product I want?" rather than "we built and maintain git, so let's do that and eventually monetize it."

ahallock 2 days ago 5 replies      
Docker still has a long way to go in terms of local development ergonomics. Recently, I finally had my chance to onboard a bunch of new devs and have them create their local environment using Docker Compose (we're working on a pretty standard Rails application).

We were able to get the environments set up and the app running, but the networking is so slow as to be pretty much unusable. Something is wrong with syncing the FS between Docker and the host OS. We were using the latest Docker for Mac. If the out-of-the-box experience is this bad, it's unsuitable for local development. I was actually embarrassed.

z3t4 2 days ago 9 replies      
I don't understand containers. First you go through great pains to share and reuse libraries. Then you make a copy of all the libraries and the rest of the system for each program!?
throw2016 2 days ago 4 replies      
Docker generated value from the LXC project, aufs, overlay, btrfs, and a ton of other open source projects, yet few people know about these projects or their authors; in the case of the extremely poorly marketed LXC project, many don't even know what it is, thanks to negative marketing by a Docker ecosystem hellbent on 'owning containers'.

Who is the author of aufs or overlayfs? Should these projects work with no recognition while VC-funded companies with marketing funds swoop down and extract market value without giving anything back? How has Docker contributed back to all the projects it is critically dependent on?

This does not seem like a sustainable open source model. A lot of critical problems around containers exist in places like the layer filesystems and the kernel, and these will not get fixed by Docker but by aufs, overlayfs, and the kernel subsystems. Given that most people don't even know the authors of these projects, how will this work?

There has been a lot of misleading marketing around Linux containers right from 2013, here on HN itself, and one wishes there had been more informed discussion to correct some of this misinformation; that didn't happen.

eldavido 2 days ago 0 replies      
I wish people would stop talking about valuation this way, emphasizing the bullshit headline valuation.

The reality is that (speculating), they probably issued a new class of stock, at $x/share, and that class of stock has all kinds of rights, provisions, protections, etc. that the others don't, and may or may not have any bearing whatsoever on what the other classes of shares are worth.

Steeeve 2 days ago 1 reply      
The guy who came up with chroot in the first place is kicking himself.
kev009 2 days ago 1 reply      
Do they actually have any significant revenue? I love developer tools companies, but there are several tools upstarts that have no proven business model. They look like really bad gambles in terms of VC investment, unless you can get in early enough to unload to other fools.
vesak 16 hours ago 0 replies      
Check out Chef's https://habitat.sh for one fresher take on all this. It moves the containerization approach closer to something that feels like Arch Linux packaging, with a pinch of Nix-style reproducibility. Looks very promising at this point, even if still a bit rough around the edges.
contingencies 2 days ago 1 reply      
I worked with LXC since 2009, then personally built a cloud provider agnostic workflow interface superior in scope to Docker in feature set[1] between about 2013-2014 as a side project to assist with my work (managing multi-DC, multi-jurisdiction, high security and availability infrastructure and CI/CD for a major cryptocurrency exchange). (Unfortunately I was not able to release that code because my employer wanted to keep it closed source, but the documentation[2] and early conception[3] has been online since early days.) I was also an early stage contributor to docker, providing security related issues and resolutions based upon my early LXC experience.

Based upon the above experience, I firmly believe that Docker could be rewritten by a small team of programmers (~1-3) within a few months.

[1] Docker has grown to add some of this now, but back then had none of it: multiple infrastructure providers (physical bare metal, external cloud providers, own cloud/cluster), normalized CI/CD workflow, pluggable FS layers (eg. use ZFS or LVM2 snapshots instead of AUFS - most development was done on ZFS), inter-service functional dependency, guaranteed-repeatable platform and service package builds (network fetches during package build process are cached)...

[2] http://stani.sh/walter/cims/

[3] http://stani.sh/walter/pfcts/

jdoliner 2 days ago 2 replies      
There's a couple of things in this article that I don't think are true. I don't think Ben Golub was a co-founder of Docker. Maybe he counts as a co-founder of Docker but not of Dotcloud? That seems a bit weird though. I also am pretty sure Docker's headquarters are in San Francisco, not Palo Alto.
StanislavPetrov 1 day ago 0 replies      
As someone who witnessed the 2000 tech bubble pop, I feel like Bill Murray in Groundhog Day, except unfortunately this time it's not just tech. It's going to end very badly.
frigen 1 day ago 1 reply      
Unikernels are a much better solution to the problems that Docker solves.
slim 2 days ago 2 replies      
Docker is funded by In-Q-Tel
elsonrodriguez 1 day ago 0 replies      
That's a lot of money for a static compiler.
jaequery 2 days ago 2 replies      
why are they called Software Maker?
Try Out Rust IDE Support in Visual Studio Code rust-lang.org
262 points by Rusky  2 days ago   81 comments top 12
modeless 2 days ago 5 replies      
I have been using this for a few weeks, as a newcomer to Rust. Although it has some issues, I would not try to develop Rust code without it. It is incredibly useful and works well enough for day-to-day use.

Some of the issues I've found:

* Code completion sometimes simply fails to work. For example, inside future handlers (probably because this involves a lot of type inference).

* When errors are detected only the first line of the compiler error message is accessible in the UI, often omitting critical information and making it impossible to diagnose the problem.

* It is often necessary to manually restart RLS, for example when your Cargo.toml changes. It can take a very long time to restart if things need to be recompiled, and there isn't much in the way of progress indication.

* This is more of a missing feature, but type inference is a huge part of Rust, and it's often difficult to know what type the type inference engine has chosen for parts of your code. There's no way to find out using RLS in VSCode that I've seen, or go to the definition of inferred types, etc.

Other issues I've seen as a newcomer to Rust:

* It's very easy to get multiple versions of the same dependency in your project by accident (when your dependencies have dependencies), and there is no compiler warning when you mix up traits coming from different versions of a crate. You just get impossible-seeming errors.

* Compiler speed is a big problem.

* The derive syntax is super clunky for such an essential part of the language. I think Rust guides should emphasize derive more, as it's unclear at first how essential it really is. Almost all of my types derive multiple traits.

* In general, Rust is hard. It requires a lot more thinking than e.g. Python or even C. As a result, my forward progress is much slower. The problems I have while coding in Rust don't even exist in other languages. I'm sure this will improve over time but I'm not sure it will ever get to the point where I feel as productive as I do in other languages.

sushisource 2 days ago 1 reply      
Rust is not only a fantastic language, but the level of community involvement from the devs is just completely unlike any other language I've seen in a very long time. That really makes me excited that it will be adopted in the industry over time and ideally replace some of the nightmare-level C++ code out there.
KenoFischer 2 days ago 2 replies      
I haven't had the chance to try the Rust language mode, but I've been using VS Code for all my julia development lately, and I'm pretty impressed. It's quite a nice editor. I avoided using it for a very long time because I thought it'd match Atom's slowness due to their shared Electron heritage. But for some reason VS Code feels a lot snappier. Not quite Sublime levels, but perfectly usable.
int_19h 2 days ago 2 replies      
It looks like code completion is extremely basic. I tried this:

  struct Point { x: i32, y: i32 }

  fn main() {
      println!("Hello, world!");
      let pt = Point { x: 1, y: 2 };
      println!("{} {}", pt.x, pt.y);
      let v = vec![ pt ];
      let vpt = &v[0];
      println!("{} {}", vpt.x, vpt.y);
  }
And I can't get any dot-completions on vpt (but I can on pt). Which is also kinda weird, because if I hover over vpt, it does know that it is a &Point...

Even more weird is that if I add a type declaration to "let vpt" (specifying the same type that would be inferred), then completion works.

That sounds like a really basic scenario... I mean, type inference for locals is pervasive in Rust.
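For anyone who wants to reproduce this, here is a minimal standalone sketch of the same scenario; `vpt2` carries the explicit annotation described above, spelling out the same `&Point` type the compiler infers for `vpt` on its own:

```rust
struct Point { x: i32, y: i32 }

fn main() {
    let v = vec![Point { x: 1, y: 2 }];
    // Inferred as &Point: hover shows the type, but dot-completion fails
    let vpt = &v[0];
    println!("{} {}", vpt.x, vpt.y);
    // Same type written explicitly: dot-completion works
    let vpt2: &Point = &v[0];
    println!("{} {}", vpt2.x, vpt2.y);
}
```

Both bindings have identical types at runtime; the difference is purely in what the tooling manages to resolve.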

ericfrederich 2 days ago 4 replies      
CSDude 2 days ago 1 reply      
I tried it, and it works really nicely. If you are looking for an alternative, IntelliJ IDEA works very well when you install the Rust plugin.
alkonaut 1 day ago 0 replies      
The vscode rust support is progressing nicely and if vscode is already your go-to editor it's the obvious choice.

However if you just want to dip your toes and get going with rust with minimal fuss, I find IntelliJ community+Rust to be the best combo. Vscode+Rust is not as polished yet.

RussianCow 2 days ago 2 replies      
Does this support macro expansion of any kind? I'm currently using the plugin for IntelliJ IDEA, and it works really well aside from completely lacking support for macros, which makes its type annotations and similar features nearly useless for the project I'm working on.
rl3 2 days ago 1 reply      
Anyone here writing Rust on Windows and using WSL (Windows Subsystem for Linux) in your workflow?

I've found using WSL's "bash -c" from my Rust project's working directory in Windows to be a rather elegant way to compile and run code for a Linux target.

Theoretically it should be possible to remote debug Linux binaries in WSL from an editor in Windows, but I haven't had time to explore this yet. Both GDB and LLDB have remote debugging functionality.

demarq 2 days ago 1 reply      
Autofix doesn't seem to trigger for me, would someone confirm it's not just me?

try this in your editor

  let x = 4;
  if x = 5 {}
it should figure out you wanted x == 5.

tomc1985 2 days ago 1 reply      
I would love Rust support in Visual Studio proper...
Dowwie 2 days ago 1 reply      
Is anyone working on support in Atom?
Towards a JavaScript Binary AST yoric.github.io
306 points by Yoric  2 days ago   201 comments top 32
nfriedly 2 days ago 6 replies      
To clarify how this is not related to WebAssembly, this is for code written in JavaScript, while WASM is for code written in other languages.

It's a fairly simple optimization - it's still JavaScript, just compressed and somewhat pre-parsed.

WASM doesn't currently have built-in garbage collection, so to use it to compress/speed up/whatever JavaScript, you would have to compile an entire JavaScript Virtual Machine into WASM, which is almost certainly going to be slower than just running regular JavaScript in the browser's built-in JS engine.

(This is true for the time being, anyway. WASM should eventually support GC at which point it might make sense to compile JS to WASM in some cases.)

cabaalis 2 days ago 12 replies      
So, compiled Javascript then? "We meet again, at last. The circle is now complete."

The more I see interpreted languages being compiled for speed purposes, and compiled languages being interpreted for ease-of-use purposes, desktop applications becoming subscription web applications (remember mainframe programs?), and then web applications becoming desktop applications (Electron), the more I realize that computing is closer to clothing fads than anything else. Can't wait to pick up some bellbottoms at my local Target.

apaprocki 2 days ago 3 replies      
From an alternate "not the web" viewpoint, I am interested in this because we have a desktop application that bootstraps a lot of JS for each view inside the application. There is a non-insignificant chunk of this time spent in parsing and the existing methods that engines expose (V8 in this case) for snapshotting / caching are not ideal. Given the initial reported gains, this could significantly ratchet down the parsing portion of perceived load time and provide a nice boost for such desktop apps. When presented at TC39, many wanted to see a bit more robust / scientific benchmarks to show that the gains were really there.
le-mark 2 days ago 3 replies      
Here's some perspective for where this project is coming from:

> So, a joint team from Mozilla and Facebook decided to get started working on a novel mechanism that we believe can dramatically improve the speed at which an application can start executing its JavaScript: the Binary AST.

I really like the organization of the present article, the author really answered all the questions I had, in an orderly manner. I'll use this format as a template for my own writing. Thanks!

Personally, I don't see the appeal for such a thing, and seems unlikely all browsers would implement it. It will be interesting to see how it works out.

mannschott 2 days ago 1 reply      
This is reminiscent of the technique used by some versions of ETH Oberon to generate native code on module loading from a compressed encoding of the parse tree. Michael Franz described the technique as "Semantic-Dictionary Encoding":

SDE is a dense representation. It encodes syntactically correct source program by a succession of indices into a semantic dictionary, which in turn contains the information necessary for generating native code. The dictionary itself is not part of the SDE representation, but is constructed dynamically during the translation of a source program to SDE form, and reconstructed before (or during) the decoding process. This method bears some resemblance to commonly used data compression schemes.

See also "Code-Generation On-the-Fly: A Key to Portable Software" https://pdfs.semanticscholar.org/6acf/85e7e8eab7c9089ca1ff24...

This same technique also was used by JUICE, a short-lived browser plugin for running software written in Oberon in a browser. It was presented as an alternative to Java byte code that was both more compact and easier to generate reasonable native code for.


I seem to recall that the particular implementation was quite tied to the intermediate representation of the OP2 family of Oberon compilers making backward compatibility in the face of changes to the compiler challenging and I recall a conversation with someone hacking on Oberon that indicated that he'd chosen to address (trans)portable code by the simple expedient of just compressing the source and shipping that across the wire as the Oberon compiler was very fast even when just compiling from source.

I'm guessing the hard parts are:

(0) Support in enough browsers to make it worth using this format.

(1) Coming up with a binary format that's actually significantly faster to parse than plain text. (SDE managed this.)

(2) Designing the format to not be brittle in the face of change.

onion2k 2 days ago 2 replies      
This is a really interesting project from a browser technology point of view. It makes me wonder how much code you'd need to be deploying for this to be useful in a production environment. Admittedly I don't make particularly big applications, but I've yet to see parsing the JS code be a problem, even when there's 20MB of libraries included.
nine_k 2 days ago 1 reply      
This is what BASIC interpreters on 8-bit systems did from the very beginning. Some BASIC interpreters did not even allow you to type the keywords. Storing a trivially serialized binary form of the source code is a painfully obvious way to reduce RAM usage and improve execution speed. You can also trivially produce the human-readable source back.

It's of course not compilation (though parsing is the first thing a compiler would do, too). It's not generation of machine code, or VM bytecode. It's mere compression.
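To make concrete what "mere compression" means here, a toy sketch (my own illustration, not any real BASIC or JS engine's actual format) of keyword tokenization: each keyword collapses to a single reserved byte, everything else passes through, and reversing the mapping reproduces the human-readable source.

```rust
// Toy BASIC-style keyword tokenizer, for illustration only.
// Keywords become one reserved byte (0x80 + index); other words
// pass through as raw bytes plus a space separator.
fn tokenize(src: &str, keywords: &[&str]) -> Vec<u8> {
    let mut out = Vec::new();
    for word in src.split_whitespace() {
        match keywords.iter().position(|k| *k == word) {
            Some(i) => out.push(0x80 + i as u8),
            None => {
                out.extend_from_slice(word.as_bytes());
                out.push(b' ');
            }
        }
    }
    out
}
```

Tokenizing "var x = 1" against the keyword list ["var"] yields 7 bytes instead of 9 characters, and the loader never re-scans keyword text on each run, which is exactly the RAM and speed win those 8-bit interpreters were after.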

This is great news because you got to see the source if you want, likely nicely formatted. You can also get rid of the minifiers, and thus likely see reasonable variable names in the debugger.

ryanong 2 days ago 2 replies      
This is some amazing progress, but reading this and hearing how difficult JavaScript is as a language to design around makes me wonder how many hours we've spent optimizing a language designed in 2 weeks, and living with those consequences. I wish we could version our JavaScript within a tag somehow so we could slowly deprecate code. I guess that would mean browsers would have to support two languages, though, which would suck... this really is unfortunately the path of least resistance.

(I understand I could use elm, cjs, emscripten, or any other transpiler, but I was thinking of hours spent improving the JS VM.)

iainmerrick 2 days ago 1 reply      
This article says "Wouldn't it be nice if we could just make the parser faster? Unfortunately, while JS parsers have improved considerably, we are long past the point of diminishing returns."

I'm gobsmacked that parsing is such a major part of the JS startup time, compared to compiling and optimizing the code. Parsing isn't slow! Or at least it shouldn't be. How many MBs of Javascript is Facebook shipping?

Does anyone have a link to some measurements? Time spent parsing versus compilation?

vvanders 2 days ago 2 replies      
Lua has had something very similar (bytecode vs AST) via luac for a long while now. We've used it to speed up parse times in the past and it helps a ton in that area.
nikita2206 2 days ago 0 replies      
In this thread: people not understanding the difference between byte code (representing code in the form of instructions) and AST.
s3th 2 days ago 3 replies      
I'm very skeptical about the benefits of a binary JavaScript AST. The claim is that a binary AST would save on JS parsing costs. However, JS parse time is not just tokenization. For many large apps, the bottleneck in parsing is instead in actually validating that the JS code is well-formed and does not contain early errors. The binary AST format proposes to skip this step [0], which is equivalent to wrapping function bodies with eval. This would be a major semantic change to the language that should be decoupled from anything related to a binary format. So IMO the proposal conflates tokenization with changing early-error semantics. I'm skeptical the former has any benefits, and the latter should be considered on its own terms.

Also, there's immense value in text formats over binary formats in general, especially for open, extendable web standards. Text formats are more easily extendable as the language evolves because they typically have some amount of redundancy built in. The W3C outlines the value here (https://www.w3.org/People/Bos/DesignGuide/implementability.h...). JS's text format in general also means engines/interpreters/browsers are simpler to implement, and therefore that JS code has better longevity.

Finally, although WebAssembly is a different beast and a different language, it provides an escape hatch for large apps (e.g. Facebook) to go to extreme lengths in the name of speed. We don't need to complicate JavaScript with such a powerful mechanism when one already exists, tuned to perfectly complement it.

[0]: https://github.com/syg/ecmascript-binary-ast/#-2-early-error...

d--b 2 days ago 3 replies      
I am puzzled by how a binary AST makes the code significantly smaller than a minified+gzipped version.

A JavaScript expression such as:

  var mystuff = blah + 45

gets minified as:

  var a=b+45

And then what is costly in there is the "var " and character overhead which you'd hope would be much reduced by compression.

The AST would replace the keywords with binary tokens, but would still contain function names and so on.

I mean, I appreciate that shipping an AST will cut an awful lot of parsing, but I don't understand why it would make such a difference in size.

Can someone comment?

kyle-rb 2 days ago 1 reply      
The linked article somehow avoids ever stating the meaning of the acronym, and I had to Google it myself, so I imagine some other people might not know: AST stands for "abstract syntax tree".


svat 2 days ago 0 replies      
However this technology pans out, thank for a really well-written post. It is a model of clarity.

(And yet many people seem to have misunderstood: perhaps an example or a caricature of the binary representation might have helped make it concrete, though then there is the danger that people will start commenting about the quality of the example.)

mnarayan01 2 days ago 1 reply      
For those curious about how this would deal with Function.prototype.toSource, via https://github.com/syg/ecmascript-binary-ast#functionprototy...:

> This method would return something like "[sourceless code]".

Existenceblinks 2 days ago 0 replies      
These are random thoughts I just wrote on Twitter in the morning (UTC+7):

"I kinda think that there were no front-end languages actually. It's kinda all about web platform & browsers can't do things out of the box."

"Graphic interface shouldn't execute program on its own rather than rendering string on _platform_ which won't bother more."

"This is partly why people delegate js rendering to server. At the end of the day all script should be just WebAssembly bytecodes sent down."

"Browser should act as physical rendering object like pure monitor screen. User shouldn't have to inspect photon or write photon generators."

"SPA or PWA is just that about network request reduction, and how much string wanted to send at a time & today http/2 can help that a lot."

"Project like Drab https://github.com/grych/drab 's been doing quite well to move computation back to "server" (opposite to self-service client)"

"WebAssembly compromise (to complement js) to implement the platform. JS api and WebAssembly should be merged or united."

"VirtualDom as if it is a right way should be built-in just like DOM get constructed from html _string_ from server. All JS works must die."

"That's how WebComponent went almost a half way of fulfilling web platform. It is unfortunate js has gone far, tools are actively building on"

"I'd end this now before some thought of individualism-ruining-the-platform take over. That's not gonna be something i'd like to write (now)"


Not a complete version though. Kind of generally speaking, but I've been thinking about it in some detail. Then hours later I found this thread.

c-smile 2 days ago 2 replies      
To be honest I (as an author of the Sciter [1]) do not expect too much gain from that.

Sciter contains a source-code-to-bytecode compiler. The bytecodes can be stored to files and loaded, bypassing the compilation phase. There is not too much gain, as JS-like grammar is pretty simple.

In principle the original ECMA-262 grammar was so simple that you could parse it without the need for an AST: a direct parser with one-symbol lookahead that produces bytecodes is quite adequate.

JavaScript use cases require fast compilation anyway, for source files as well as for eval() and similar cases like onclick="..." in markup.

[1] https://sciter.com

And JS parsers used to be damn fast indeed, until the introduction of arrow functions. Their syntax is what requires an AST.

TazeTSchnitzel 2 days ago 0 replies      
It's really exciting that this would mean smaller files that parse faster, but also more readable!
iamleppert 2 days ago 3 replies      
I'd like to see some real-world performance numbers when compared with gzip. The article is a little overzealous in its claims that simply don't add up.

My suspicion is it's going to be marginal and not worth the added complexity for what is essentially a compression technique.

This project is a prime example of incorrect optimization. Developers should be focused on loading the correct amount of JavaScript needed by their application, not on trying to optimize their fat JavaScript bundles. It's just lazy engineering.

mnemotronic 1 day ago 1 reply      
Yea! A whole new attack surface. A hacked AST file could cause memory corruption and other faults in the browser-side binary expander.
kevinb7 2 days ago 1 reply      
Does anyone know where the actual spec for this binary AST can be found? In particular I'm curious about the format of each node type.
z3t4 2 days ago 0 replies      
I wish for something like evalUrl() to run code that has already been parsed "in the background" so a module loader can be implemented in userland. It would be great if scripts that are prefetched or http2 pushed could be parsed in parallel and not have to be reparsed when running eval.
malts 2 days ago 1 reply      
Yoric - the Binary AST size comparisons in the blog - was the original javascript already minified?
limeblack 2 days ago 1 reply      
Could the AST be made an extension of the language similar to how it works in Mathematica?
bigato 2 days ago 0 replies      
Trying to catch up with webassembly, huh?
jlebrech 2 days ago 1 reply      
with an AST you can visualise code in ways other than text, and also reformat code like in go-fmt.
megamindbrian 2 days ago 1 reply      
Can you work on webpack instead?
tolmasky 2 days ago 3 replies      
One of my main concerns with this proposal is the increasing complexity of what was once a very accessible web platform. There is ever-increasing tooling knowledge you need to develop, and with something like this it would certainly increase, as "fast JS" would require you to know what a compiler is. Sure, a good counterpoint is that it may be incremental knowledge you can pick up, but I still think a no-work, make-everything-faster solution would be better.

I believe there exists such a no-work alternative to the first-run problem, which I attempted to explain on Twitter, but it's not really the greatest platform to do so, so I'll attempt to do so again here. Basically, given a script tag:

 <script src = "abc.com/script.js" integrity="sha256-123"></script>
A browser, such as Chrome, would kick off two requests, one to abc.com/script.js, and another to cdn.chrome.com/sha256-123/abc.com/script.js. The second request is for a pre-compiled and cached version of the script (the binary ast). If it doesn't exist yet, the cdn itself will download it, compile it, and cache it. For everyone except the first person to ever load this script, the second request returns before the time it takes for the first to finish + parse. Basically, the FIRST person to ever see this script online takes the hit for everyone, since it alerts the "compile server" of its existence; afterwards it's cached forever and fast for every other visitor on the web (that uses chrome). (I have later expanded on this to have interesting security additions as well -- there's a way this can be done such that the browser does the first compile and saves an encrypted version on the chrome cdn, such that google never sees the initial script and only people with access to the initial script can decrypt it). To clarify, this solution addresses the exact same concerns as the binary AST issue. The pros to this approach in my opinion are:

1. No extra work on the side of the developer. All the benefits described in the above article are just free without any new tooling.

2. It might actually be FASTER than the above example, since cdn.chrome.com may be way faster than wherever the user is hosting their binary AST.

3. The cdn can initially use the same sort of binary AST as the "compile result", but this gives the browser flexibility to do a full compile to JIT code instead, allowing different browsers to test different levels of compiles to cache globally.

4. This would be an excellent way to generate lots of data before deciding to create another public facing technology people have to learn - real world results have proven to be hard to predict in JS performance.

5. Much less complex to do things like dynamically assembling scripts (like for dynamic loading of SPA pages) - since the user doesn't also have to put a binary ast compiler in their pipeline: you get binary-ification for free.

The main con is that it makes browser development even harder to break into, since if this is done right it would be a large competitive advantage and essentially requires a browser vendor to also host a CDN. I don't think this is that big a deal given how hard it already is to get a new browser out there, and the advantage of getting browsers to compete on compile targets makes up for it in my opinion.
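For concreteness, the lookup for the second request could be sketched as below. Everything here is hypothetical per the proposal itself: cdn.chrome.com is not a real service, and the keying scheme is just the one described above (integrity hash plus host/path).

```rust
// Build the hypothetical pre-compile cache URL from a script tag's
// src and integrity attributes, per the scheme described above.
// "cdn.chrome.com" is illustrative only, not a real endpoint.
fn cache_url(src: &str, integrity: &str) -> String {
    // Strip any scheme; the cache is keyed on integrity + host/path,
    // so the content hash guarantees the cached compile matches.
    let host_path = src
        .trim_start_matches("https://")
        .trim_start_matches("http://");
    format!("https://cdn.chrome.com/{}/{}", integrity, host_path)
}
```

For the example tag, `cache_url("abc.com/script.js", "sha256-123")` yields `https://cdn.chrome.com/sha256-123/abc.com/script.js`, the second request the browser would race against the origin fetch.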

agumonkey 2 days ago 0 replies      
hehe, reminds me of emacs byte-compilation..
Laaas 2 days ago 0 replies      
Why does this guy use bits instead of bytes everywhere?
FrancoisBosun 2 days ago 1 reply      
I feel like this may become some kind of reimplementation of Java's byte code. We already have a "write once, run anywhere" system. Good luck!
Ask HN: What mistakes in your experience does management keep making?
443 points by oggyfredcake  4 days ago   373 comments top 15
Boothroid 3 days ago 8 replies      
* Zero career direction and zero technical speciality for devs

* Underestimation of difficulty whether through cynicism (burn the devs) or cluelessness

* Inadequate training, and the expectation that devs can just piggyback learning technology X from scratch whilst writing production software using it

* Trying to use one off contracts as a way of building resellable products

* Insistence that all devs time must be billable and trying to defy gravity in ignoring skills rot etc. through lack of investment in training

* Expectation that devs can be swapped between technologies without problems

* Swapping people in and out of projects as if this will not affect progress

* Deliberate hoarding of information as a means of disempowering devs

All of this inevitably leads to a bunch of pissed off devs. The ones that are happy to eat it become the golden boys and get promotions. Those that point out the bullshit leave once they can and are replaced with the desperate at the bottom who sooner or later arrive at the same position of wanting to leave once they realise what's going on. I think tech can be pretty miserable if you are not in the upper echelon of lucky types that can score a position at a Google, Facebook etc.

Oh and a couple more:

* Give no feedback unless things go wrong

* Treat your highly educated, intelligent and motivated devs like children by misusing agile in order to micromanage them

jerf 3 days ago 6 replies      
I'll add one that even after 200 comments I don't see: failure to explain the reason why. Coming down to their developers with a list of tasks without explaining why those tasks are the most important and will lead to company success.

You might think startups are small enough that this couldn't happen but that was actually where my worst experience was. The founders are visibly in a meeting with a couple people, maybe "suits", maybe not. They come out of the meeting and the next day your priorities are rewritten. Cool beans, that's a thing that can happen and that's not my issue. My issue is, why? What are the goals we are trying to hit now? What's the plan? Why is that better than the old plan?

This is especially important IMHO for more senior engineers responsible for architecture, because those priorities can greatly affect the architecture. Telling me why lets me start getting a grasp on what parts of the code are long term and what can be considered a short-term hack, what scaling levels I need to shoot for, and all sorts of other things that are very hard to determine if you just come to me with "And actually, our customers need a new widget to frozzle the frobazz now more than they need to dopple the dipple now."

Not necessarily the biggest issue, there's a lot of other suggestions here that are probably bigger in most places, but this is one that has frustrated me.

(I'll also say this is one you may be able to help fix yourself, simply by asking. If you are in that senior role I think you pretty much have a professional obligation to ask, and I would not be shy about working that into the conversation one way or another.)

muzani 3 days ago 4 replies      
* Killing things that are low profit margins, under some misguided Pareto Principle approach. Sometimes these things are loss leaders designed to pull customers for other products.

* Spending too much on marketing/sales before people want the product. They usually just end up burning their brand if the product is too low quality.

* Too much focus on building multiple small features rather than focusing on the value proposition.

* Trying to negotiate deadlines for product development. "We don't have two months to finish this. Let's do this in one." In software estimation, there's the estimate, the target, and the commitment. If the commitment and estimate are far off, it should be questioned why, not negotiated.

* Hiring two mediocre developers at half the salary of one good developer. They usually can't solve problems past a certain threshold.

* Importing tech talent, rather than promoting. Usually the people who have built the product have a better understanding of the tech stack than someone else they import.

* Startups that rely on low quality people to skimp on the budget. These people later form the DNA of the company and make it difficult to improve, if they're not the type who improve themselves.

stickfigure 3 days ago 25 replies      
I've never met a manager that wouldn't rather pay four average people $100/hr to solve a problem that one smart person could solve in half the time for $400/hr.

There seems to be some sort of quasi-religious belief in the fundamental averageness of humans; consequently the difference between developer salaries at any company varies by maybe 50%, whereas the productivity varies by at least a full order of magnitude.

Until "management" realizes this, the only way that a developer on the upper end of the productivity scale can capture their value is to found their own company. I sometimes wonder what would happen if some company simply offered to pay 3X the market rate and mercilessly filter the results.

JamesLeonis 3 days ago 3 replies      
Want to jump ahead a few years from Mythical Man-Month? Let me recommend Peopleware by Tom DeMarco and Tim Lister.[2] It's painful that we haven't crawled far out of the 80s practices.

The first chapter says: "The major problems of our work are not so much technological as sociological in nature." Sorry Google Memo Dude. DeMarco and Lister called it in the 80s.

Speaking of DeMarco, he also wrote a book about controlling software projects before Peopleware. Then in 2009 he denounced it. [1]

 To understand control's real role, you need to distinguish between two drastically different kinds of projects:

 * Project A will eventually cost about a million dollars and produce value of around $1.1 million.

 * Project B will eventually cost about a million dollars and produce value of more than $50 million.

 What's immediately apparent is that control is really important for Project A but almost not at all important for Project B. This leads us to the odd conclusion that strict control is something that matters a lot on relatively useless projects and much less on useful projects. It suggests that the more you focus on control, the more likely you're working on a project that's striving to deliver something of relatively minor value.
I always think about that when I'm doing a Sprint Review.

[1]: https://www.computer.org/cms/Computer.org/ComputingNow/homep...

[2]: https://en.wikipedia.org/wiki/Peopleware:_Productive_Project...

lb1lf 3 days ago 6 replies      
Working for a company building heavy hardware, I see the following happen time and time again:

* Reorganizing seemingly for the sake of reorganizing. Result: Every time the new organization has settled somewhat and people know who to interact with to make things flow smoothly, everything is upended and back to square one.

* Trying to make our products buzzword compliant without understanding the consequences - we've on occasion been instructed to incorporate technologies which are hardly fit for purpose simply because 'everyone else is doing it' (Where 'everyone' is the companies featured in whatever magazine the CEO leafed through on his latest flight. Yes, I exaggerate a bit for effect.)

* Misguided cost savings; most of what hardware we use, we buy in small quantities - say, a few hundred items a year, maximum. Yet purchasing are constantly measured on whether they are able to source an 'equivalent' product at a lower price. Hence, we may find ourselves with a $20,000 unit being replaced by a $19,995 one - order quantity, 5/year - and spend $10,000 on engineering hours to update templates, redo interfaces &c.

* Assuming a man is a man is a man and that anyone is easily and quickly replaceable (except management, of course) - and not taking the time and productivity loss associated with training new colleagues into account.

Edit: An E-mail just landed in my inbox reminding me of another:

* Trying to quantify anything and everything, one focuses on the metrics which are easy to measure, rather than the ones which matter. As a result, the organization adapts and focuses on the metrics being measured, not the ones which matter - with foreseeable consequences for productivity.

ChuckMcM 3 days ago 4 replies      
There are some very common ones;

* Building a one more generation of product than the market supports (so you build a new version when the market has moved on to something new).

* Rewarding productivity over quality.

* Managing to a second-order effect. For example, when Nestlé bought Dreyer's they managed to 'most profit per gallon', which rewarded people who substituted inferior (and cheaper) components; that led to lower overall sales, which in turn led to lower overall revenue. Had they managed to overall revenue they might have caught the decline sooner.

* Creating environments where nobody trusts anyone else and so no one is honest. Leads to people not understanding the reality of a situation until the situation forces the disconnect into the mainstream.

* Rewarding popular employees differently than rank and file. Or generally unevenly enforcing or applying standards.

* Tolerating misbehavior out of fear of losing an employee. If I could fire anyone in management who said, "Yeah but if we call them on it they will quit! See what a bind that puts us in?" I believe the world would be a better place.

There are lots of things, that is why there are so many management books :-)

sulam 3 days ago 3 replies      
I have held management and non-management careers in roughly equal proportion over my career. My list would look like this:

1) believing you can dramatically change the performance of an employee -- it's very rare to save someone and less experienced managers always believe they can.

1.5) corollary to the above: not realizing the team is aware and waiting for you to fix the problem and won't thank you for taking longer to do what's necessary.

2) believing that people don't know what you're thinking -- people see you coming a mile off.

3) thinking you can wait to fix a compensation problem until the next comp review -- everyone waits too long on these.

4) believing HR when they tell you that you can't do something that's right for your team -- what they're really saying is that you have to go up the ladder until you find someone who can force them to make an exception.

5) not properly prioritizing the personal/social stuff -- at least this is my personal failing, and why ultimately management has not stuck for me.

6) believing your technical opinion matters -- I've seen way too many VPs making technical decisions that they are too far from the work to make; trust your team!

It'd be fun to see a list of these from the non-management point of view. I'd start off with the inverse of #6 above:

1) believing your technical opinion matters -- the business is what ultimately matters.

tboyd47 3 days ago 5 replies      
Trying to write code alongside their devs.

Here's what happens when a manager tries to fill tickets himself: his sense of control of the project is derived not from relationships of trust and cooperation with his reports, but from direct involvement in the code. So naturally, any challenging or critical piece of code ends up getting written by him (because otherwise, how could he be confident about it?)

The manager is essentially holding two jobs at once so they end up working late or being overly stressed at work.

The devs will feel intimidated to make architecture decisions, because they know if they do something their manager doesn't like, it will get refactored.

They will also feel as if they are only given the "grunt work" as all the challenging work is taken on by their manager.

The code itself is in a constant state of instability because there is a tension between the manager needing the other employees' help to get the code written on time, while also needing to have that complete control and mastery over the code that can only come from writing it yourself. So people's work gets overwritten continually.

This is very bad and it's very common - managers should learn to delegate as that is an essential part of their job. If they can't delegate they should remain as an individual contributor and not move into management.

ideonexus 3 days ago 3 replies      
The biggest recurring issue I've had with my managers over the last twenty years is their need to add unnecessary complexity to projects. I think a good manager stays out of the way and just monitors employees for any obstructions that are preventing them from meeting their goals. Yet, my experience is that when a manager sits in on a project meeting, they can't help but start giving input on the project itself, adding complexity to defined business rules or adding obscure use cases to the system. Too many managers can't help but dominate meetings because their dominant personalities are how they became managers in the first place.

The worst is when you get two or more managers attending the same meeting. Then nothing will get done as they eat up all of the meeting time arguing about business rules, magnifying the complexity of the system until you end up with some Rube Goldberg chain of logic that they will completely forget minutes after they've left the meeting. A good manager knows to trust their employees and only intervenes to make sure those employees have the resources they need to do their jobs. The most effective managers are humble and respect the expertise of the experts they hire.

alexandercrohde 3 days ago 2 replies      
- Trying to "create a buzz" around the office, asking for a "sense of urgency," and other things that result in an illusion of productivity.

- Focusing on fixing problems, rather than preventing problems

- Acting as yes-men to bad upper-management strategy, thereby creating a layer of indirection between the people who think it's a good plan vs the engineers who can explain why it's not quite that easy

- Trying to use software tools (e.g. Jira's burndown charts) to quantitatively/"objectively" measure engineers

mychael 3 days ago 0 replies      
A few patterns I've seen:

* Preaching about the virtues of a flat organizational structure, but making unilateral decisions.

* Hiring people for a particular challenging job, but have them work on menial unchallenging tasks.

* Creating multiple layers of management for a tiny team.

* Facilitating post mortems that would be better facilitated by a neutral third party.

* Using vague management speak as a deliberate strategy to never be held responsible for anything.

* Rewarding politics with promotions.

* Marginalizing experienced employees.

* Talking too much about culture.

* Trying to be the company thought leader instead of helping people do their best work.

* Assuming that everyone underneath you views you as a career mentor.

* Negging employees.

* New hire managers: Firing incumbent employees after youve only been on the job for a few weeks.

* New hire managers: Not doing 1:1s with everyone who reports to you.

* New hire managers: Create sweeping changes like re-orgs after a few weeks on the job.

* New hire managers: Doing things a certain way because it worked well at a previous company.

* New hire managers: Changing office work hours to suit your personal life.

greenyoda 3 days ago 2 replies      
Promoting technical people with no management experience into management jobs, without providing them with any training or guidance. (Happened to me.) Writing code and managing people require very different sets of skills, and just because you're good at the former doesn't necessarily mean you'll be any good at the latter (or that you'll enjoy doing it).

(Similar problems can happen when a bunch of people with no management skills decide to found a company and start hiring people.)

redleggedfrog 3 days ago 1 reply      
The worst mistake I've seen management make over 20 years of software development is not listening to the technical people.

Estimates get shortened. Technical decisions are overruled for business or political reason. Warnings about undesirable outcomes are ignored. Sheer impossibility deemed surmountable.

I feel this is the worst mistake by management because the technical people are the ones who suffer for it. Overtime, inferior software, frustration, technical debt, lack of quality, are all things management doesn't really care about because they can always just push people harder to get what they want.

cbanek 3 days ago 2 replies      
Overly optimistic schedules. Even with a known gelled team, being constantly overscheduled is a nightmare. You cut corners, and are always stressed and tired. Other teams that believe the optimistic schedules may become angry or blocked on you. Over time this just leads to burnout, but since nobody seems to stay anywhere for very long, nobody seems to care.
Laverna A Markdown note-taking app focused on privacy laverna.cc
305 points by mcone  23 hours ago   165 comments top 50
edanm 18 hours ago 6 replies      
I'd really love a good Evernote alternative, but the one feature that tends not to exist is full page bookmarking / web clipping. I want to be able to clip a full page easily into the program, which will also save a copy of whatever article I happen to be reading. I really wouldn't mind (and would even love) to roll my own notes system with vim/etc. But without full page clipping, it would be a problem.

Another good thing about Evernote is the easy ability to mix in images, documents, and text.

The reasons I want to leave Evernote, btw, is:

1. I worry about their future and would rather a more open solution.

2. Their software, at least on Mac, really, really sucks. It's slow, and has tons of incredibly ridiculous bugs that have been open for a long time. E.g. when typing in a tag, if there's a dash, it will cause a problem with the autocompletion. For someone who uses tags a lot and has a whole system based on them, having dashes cause a problem is a big deal, and the fact that it hasn't been fixed in ~ a year makes me really question their priorities.

yborg 19 hours ago 3 replies      
Apart from having sync capability (via Dropbox) this in almost no way shape or form replicates the current capabilities of Evernote. A more accurate title would be "Laverna: An open source note-taking application." This of course will not generate many clicks, since there are dozens of things like this, many of them better-looking and more mature.
zachlatta 21 hours ago 16 replies      
I've given up on using any sort of branded app for notetaking. At best it's open source and the maintainers will lose interest in a few years.

When you write things down, you're investing in your future. It's silly to use software that isn't making that same investment.

After trying Evernote, wikis, org-mode, and essentially everything else I could find, I gave up and tried building my own system for notes. Plain timestamped markdown files linked together. Edited with vim and a few bash scripts, rendered with a custom deployment of Gollum. All in a git repo.

It's... wonderful. Surprisingly easy. Fast. If there's a feature I wish it had, I can write a quick bash script to implement it. If Gollum stops being maintained, I can use whatever the next best markdown renderer is. Markdown isn't going away anytime soon.

It's liberating to be in control. I find myself more eager to write things down. I'm surprised more people don't do the same.

Edit: here's what my system looks like https://imgur.com/a/nGplj
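A hedged sketch of that kind of system: timestamped markdown files plus a couple of helper functions. The filename scheme, NOTES_DIR location, and the [[wiki-link]] convention are assumptions for illustration, not the commenter's actual scripts:

```shell
#!/bin/sh
# Timestamped markdown notes in a flat directory, in the spirit of the
# workflow described above. Paths and naming conventions are assumptions.
NOTES_DIR="${NOTES_DIR:-$HOME/notes}"

new_note() {                 # new_note "Some Title" -> prints the new path
  mkdir -p "$NOTES_DIR"
  slug=$(printf '%s' "$1" | tr 'A-Z' 'a-z' | tr ' ' '-')
  file="$NOTES_DIR/$(date +%Y%m%d%H%M%S)-$slug.md"
  printf '# %s\n\n' "$1" > "$file"
  printf '%s\n' "$file"
}

backlinks() {                # backlinks slug -> notes that [[link]] to it
  grep -rl "\[\[$1\]\]" "$NOTES_DIR" 2>/dev/null
}
```

From there, `git init` in the notes directory plus an occasional `git commit` gives history, backup, and sync for free, with any markdown renderer layered on top.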

trampi 17 hours ago 1 reply      
Just FYI, more than one year has passed since the last release. The commit frequency has declined significantly. I use it, but I am not sure if I would recommend it in its current state. It does its job and I like it, but the future is uncertain.
omarish 13 hours ago 0 replies      
The encryption seems very insecure. I just tried turning on encryption and it revealed my password in the URL bar. And now each time I click on a new page, it shows my password in the URL bar.


mikerathbun 20 hours ago 2 replies      
I am constantly looking for a good notes app. I have been a paying Evernote user for years and I really like it. The only problem is the formatting. I take a lot of pride in formatting my notes and like them to look a certain way depending on the content. Markdown is definitely the way I want to go, which Evernote has promised in the past but still hasn't delivered. That said, none of the buttons on Laverna seem to work on my Mac. Can't sign into Dropbox and can't create a notebook. Oh well.
itaysk 19 hours ago 5 replies      
There are so many note taking apps and yet I still can't find one I like. My requirements are simple:

- Markdown

- Cross-platform with sync

- Tags

I have settled on SimpleNote for now, but I'm not completely happy. Its Mac app is low quality and doesn't have markdown; it's open source but they ignore most of the issues. Bear Notes looks cool but wasn't cross platform.

I am still looking. If this thing had phone apps (I'm on iPhone) I'd give it a go.

bharani_m 16 hours ago 1 reply      
I run a minimal alternative to Evernote called EmailThis [1].

You can add the bookmarklet or browser extension. It will let you save complete articles and webpages to your email inbox. If it cannot extract useful text, EmailThis will save the page as a PDF and send it as an attachment.

No need to install apps or login to other 3rd party services.

[1] https://www.emailthis.me

mgiannopoulos 20 hours ago 0 replies      
This came up on Product Hunt today as well: "Turtl lets you take notes, bookmark websites, and store documents for sensitive projects. From sharing passwords with your coworkers to tracking research on an article you're writing, Turtl keeps it all safe from everyone but you and those you share with." https://turtlapp.com/download/
trextrex 14 hours ago 0 replies      
Last I checked Laverna, they had really serious issues with losing data after every update or so. I stopped using it after encountering one of these. Looks like a lot of these issues are still open:

Edit: Formatting

ernsheong 20 hours ago 2 replies      
It doesn't do web clippings though.

Incidentally, I am building https://pagedash.com to clip web pages more accurately, exactly as you saw it (via a browser extension)! Hope this helps someone.

twodave 7 hours ago 0 replies      
I tend to use Workflowy.com for anything hierarchical/simple/listy and then Trello for anything bigger.

For instance, recently did some CTO interview screenings via phone. It was really easy to set up a Trello board with a card per candidate, drop them in the list matching their current position in the pipeline, attach a resume, recruiter notes, due dates etc. The interview itself I threw as a bulleted list into Workflowy and just crossed things off as they were covered. Took notes in notepad and uploaded to the Trello board at the end. Invited stakeholders to view the board and sent out a daily email with progress. Interviewed 8 candidates this way in a total of about 10 hours, including all the time spent prepping and scoring and communicating with the hiring team.

yeasayer 21 hours ago 2 replies      
One of the biggest use cases of Evernote for me is OCR notes with search. All my important checks, slips and papers go there. It seems that Laverna doesn't have this feature, so it's not an alternative for me.
macawfish 15 hours ago 1 reply      
For notes, I use a text editor and Resilio Sync/Syncthing.

It's great!

scribu 21 hours ago 1 reply      
Would be interesting to do a comparison with Standard Notes, which seems to offer the same features.
kepano 12 hours ago 0 replies      
Recently went through the process of evaluating every note taking tool I could find. Settled on TiddlyWiki which is slightly unintuitive at first but very well thought out once you get it customized to your needs. Fulfills most of the needs I see people requesting on this thread, i.e. flat file storage, syncable via Dropbox, markdown support, wiki structure.
devinmcgloin 19 hours ago 0 replies      
I've been using Notion (https://www.notion.so) for a while and have nothing but good things to say.

- It's incredibly flexible. You can model Trello task boards in the same interface as writing or making reference notes.

- They've got a great desktop client and everything syncs offline.

- LaTeX support

- Programmable templates

- Plus there seem to be pretty neat people behind it

I switched to it 8 months ago or so and haven't really looked back.

barking 16 hours ago 0 replies      
What are the main concerns people have about using Evernote: data protection, the company going out of business, the code being closed and proprietary? I can understand all of those, but sometimes it also feels like everyone (me included) expects every piece of software to be free now.

I have a free evernote account and don't use it very much but I find it handy for some things such as cooking recipes and walking maps. I think it would also be great for Dave Allen's GTD technique if I could ever be disciplined enough.

If Evernote removed the free tier I think I would pay up; the pricing for the personal plans is very reasonable. I'd probably make more use of it too. Humans don't tend to value free stuff. For someone like me, I think they'd have had a better chance of turning me into a paying customer if their model was an initial free period followed by having to pay up. But I will never pay up if I can get away with paying nothing.

LiweiZ 9 hours ago 0 replies      
Notes are data. We need ways to input and store them fully under the user's control. And we need a much better way to get insight from our own notes.
dade_ 14 hours ago 2 replies      
I recently tried it again. Laverna is very buggy, and I just received an email from Dropbox noting that the API it used is being deprecated. The app isn't really native, just a Chromium window running a local web app.

So if it needs to be mobile, I am using onenote, but have to use the web app in Linux, and search is useless on the web app. So for desktop only, I use Zim. Cross platform, lots of plugins, stores everything in a file system with markdown. I haven't been able to get SVG to render in the notes though, which would be awesome, then I could just edit my diagrams and pictures with Inkscape. I can read the notes on mobile devices as they are just in markdown, but a mobile app really is needed.

jasikpark 10 hours ago 0 replies      
A ridiculously simple, but good notes app I've found is https://standardnotes.org
ziotom78 18 hours ago 0 replies      
I used to use org-mode to take down notes when I attended seminars or meetings (I'm an astrophysicist). However, a feature I missed was the ability to quickly take photos to insert into my notes, in order to capture slides or calculations/diagrams done on the blackboard.

Thus, last year I subscribed to Evernote (which provides both features), and I must say that I am extremely satisfied. Moreover, Evernote's integration with Firefox and Android allows me to quickly save web pages for later reading (this might be possible with org-mode, but not as handy as with Evernote, which requires just one tap).

I think that Laverna is interesting for users like me: it provides a web app with a nice interface, it implements the first feature I need (easy photo taking), and if an Android app really is on the way, integration with Android services might allow saving web pages in Laverna with one tap, like Evernote.

tandav 20 hours ago 0 replies      
I use plain .md files in a GitHub "Notes" repo. I don't even render them, just use the Material Theme for Sublime Text.


bunkydoo 20 hours ago 2 replies      
I'm still using paper over here, nothing seems to do it for me on the computer. Paper is great, and paper is king.
mavci 4 hours ago 0 replies      
I exported my content and found it in plain text. I think exported content should be encrypted too.
perilunar 13 hours ago 0 replies      
I gave up on Evernote after experiencing syncing problems. Now I just use the default macOS and iOS Notes.app. Seems kind of boring but it actually works really well, and is nicely minimal. Also it's free, pre-installed, has no sync problems, and has web access via iCloud when I need it.

But for the love of god, why did they make the link colour orange instead of the default blue? And why can't it be changed via preferences? They had one job

tardygrad 21 hours ago 0 replies      
I'm going to give this a go.

Self hosted Dokuwiki has been my note taking tool of choice, usable on multiple devices, easy to backup, easy to export notes but markdown sounds good.

Is it possible to share notes or make notes public?

tomerbd 19 hours ago 1 reply      
I found Google Keep to be the best for small notes without too much categorization, and Google Sheets to be the best for larger-scoped note taking due to the tabs.
snez 8 hours ago 1 reply      
Like what's wrong with the macOS Notes app?
anta40 18 hours ago 0 replies      
I still use Evernote on my Android phone (Galaxy Note 4), mainly because of handwriting support.

For simplistic notes, well Google Keep is enough.

Still looking for alternatives :)

pacomerh 7 hours ago 0 replies      
Bear notes is free if you don't sync your devices and it supports markdown well. Very clean app.
Skunkleton 7 hours ago 1 reply      
We have had this application for a long time. It is called a text editor or a word processor.
paulsutter 21 hours ago 1 reply      
What I really, really want is a tool that keeps notes in GitHub, and therefore has an open/standard/robust way to work offline, merge changes, and resolve conflicts.

I've lost so much data from Evernote's atrocious conflict resolution that it's my central concern. I don't see any mention of that here.

Use case: edit notes on a plane on laptop, edit notes on phone after landing, sometime later use laptop again and zap.
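Git can already get most of the way there for plain-text notes: its built-in `union` merge driver keeps the lines from both sides instead of emitting conflict markers, which suits append-heavy note files. A minimal local demonstration, with the two devices modeled as branches (repo location, branch names, and file names are illustrative, and this is a sketch rather than a full sync setup):

```shell
#!/bin/sh
# Demonstrate git's union merge driver on a notes file edited on two
# "devices" (modeled as branches).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo '*.md merge=union' > .gitattributes   # built-in driver: keep both sides

printf '%s\n' '- buy milk' > todo.md
git add -A
git commit -qm 'base note'
base=$(git symbolic-ref --short HEAD)

git checkout -qb phone                     # edit made on the phone
printf '%s\n' '- call bank' >> todo.md
git commit -qam 'phone edit'

git checkout -q "$base"                    # concurrent edit on the laptop
printf '%s\n' '- book flight' >> todo.md
git commit -qam 'laptop edit'

git merge -q --no-edit phone               # union merge: no conflict markers
cat todo.md                                # contains all three lines
```

Order of the merged lines aside, nothing is lost and nothing needs hand-resolution, which addresses the "edit on plane, edit on phone, zap" scenario for simple list-style notes.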

djhworld 13 hours ago 0 replies      
org-mode works well enough for me. It's a bit awkward at first and requires you to remember a lot of key combinations and things, but it does the job.

It doesn't work so well across devices (especially mobile), so I tend to carry around a small notebook, and then when I'm back at my computer I type anything useful that I'd captured in my notebook into org mode.

Sometimes I just take a picture of my notes in my notebook and then use the inlineimages feature to display the image inline, that works pretty well too although there's no OCR.

It seems to work OK.

chairmanwow 20 hours ago 0 replies      
Using the online editor on Android with Firefox is essentially unusable. It feels almost like Laverna is trying to do autocorrect at the same time as my keyboard. Characters appear and disappear as I type which makes for a really confusing UX.
devalnor 15 hours ago 0 replies      
I'm happy with Inkdrop https://www.inkdrop.info/
jusujusu 17 hours ago 0 replies      
Title is making me post this: http://elephant.mine.nu

Cons: no mobile app, no OCR for docs, no web clipper

pacomerh 21 hours ago 0 replies      
I'm very happy with Bear notes. Will give this a shot though.
pookeh 17 hours ago 0 replies      
I have been using Trello. To save a screenshot, I Ctrl+Cmd+Shift+4 the screen, and paste directly into a card. It's fast.
nishs 19 hours ago 0 replies      
The macOS and web application don't look like the screenshot on the landing page. Is there a theme that needs to be configured separately?
ehudla 16 hours ago 0 replies      
The two must haves for me are integration with org mode (as was mentioned in thread) and with Zotero.
5_minutes 18 hours ago 0 replies      
I love Evernote for its OCR capabilities, so I can go paperless. But it seems this is not implemented here.
znpy 21 hours ago 0 replies      
Very cool!

Just wanted to say that the Notes app in Nextcloud is very handy too!

Actually, if Nextcloud could embed this Laverna somehow... that would be awesome.

krisives 5 hours ago 0 replies      
Download no thanks
4010dell 18 hours ago 0 replies      
I like it. Better than evernote. evernote was like trying to win a marathon running backwards.
Brajeshwar 20 hours ago 1 reply      
"laverna.app" can't be opened because it is from an unidentified developer.


nodomain 15 hours ago 0 replies      
Last release 1 year ago... seems dead, right?
lewisl9029 15 hours ago 0 replies      
It's really cool to see another app using remoteStorage for sync! I built Toc Messenger a few years ago on top of remoteStorage for sync as well, and it was a pleasure to work with (https://github.com/lewisl9029/toc, the actual app is no longer functioning since I took down the seed server quite a while ago). Unfortunately, it seems like the technology hasn't gained much traction since I last worked with it. The only 2 hosts listed on their wiki that offer hosted remoteStorage are the same that I saw two years ago: https://wiki.remotestorage.io/Servers

The other alternative sync method offered is Dropbox, and if it's also using the remoteStorage library as the interface as I'm assuming, it would have to depend on their Datastore API, which has been deprecated for more than a year now AFAIK (https://blogs.dropbox.com/developers/2015/04/deprecating-the...). Is that aspect of the app still functional? If anyone knows any other user-provided data storage APIs like Dropbox Datastore or remoteStorage that's more actively developed and supported, I'd love to hear about them.

The concept of apps built on user-provided and user-controlled data-sources, envisioned by projects like remoteStorage and Solid (https://solid.mit.edu/), has always been immensely appealing to me. If users truly controlled their data, and only granted apps access to the data they need to function (instead of depending on each individual app to host user data in their own locked-off silos), then switching to a different app would be a simple matter of granting another app access to the same pieces of data. Lock-in would no longer be a thing!

Imagine that! We could have a healthy and highly competitive app ecosystem where users choose apps by their own merit instead of by the size of their moat built on nothing but network effects. Newcomers could unseat incumbents by simply providing a better product that users want to switch to. Like a true free-market meritocracy!

Sadly, this is a distant dream because both newcomers and incumbents today realize the massive competitive advantage lock-in and network effects afford them. Incumbents will never give up their moat and allow the possibility of interop without a fight, and newcomers all end up racing to build up their own walled-off data silos because they have ambitions to become an incumbent enjoying a moat of their own one day. Even products that are built on top of open protocols and allow non-trivial interop tend to eventually go down the path of embrace, extend, extinguish, once they reach any significant scale.

I'm starting to think strong legislation around data-portability and ownership may be the only way a future like this could stand to exist, but the incumbents of today and their lobbying budgets will never let that happen.
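The user-controlled data model described above can be sketched as a store that apps access only through user-issued scoped grants; "switching apps" then just means granting the new app access to the old app's scope. A toy Python sketch of the idea (all names are invented for illustration; this is not the actual remoteStorage API):

```python
class DataStore:
    """A user-owned data store: apps get scoped grants, not private silos."""

    def __init__(self):
        self._data = {}    # scope -> {key: value}
        self._grants = {}  # app -> {scope: "r" or "rw"}

    def grant(self, app, scope, mode="rw"):
        # The user, not the app, decides who can touch which scope.
        self._grants.setdefault(app, {})[scope] = mode

    def write(self, app, scope, key, value):
        if self._grants.get(app, {}).get(scope) != "rw":
            raise PermissionError(f"{app} has no write grant for {scope!r}")
        self._data.setdefault(scope, {})[key] = value

    def read(self, app, scope, key):
        if self._grants.get(app, {}).get(scope) not in ("r", "rw"):
            raise PermissionError(f"{app} has no read grant for {scope!r}")
        return self._data[scope][key]

store = DataStore()
store.grant("old-notes-app", "notes")
store.write("old-notes-app", "notes", "todo", "buy milk")

# No migration needed: the new app reads the same user-owned scope.
store.grant("new-notes-app", "notes", mode="r")
store.read("new-notes-app", "notes", "todo")
```

The point of the sketch is that lock-in disappears when the grant table, not the app, is the boundary around the data.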

loomer 19 hours ago 0 replies      
>Laverna for android is coming soon

I'd probably start using it right now if it was already available for Android.

rileytg 20 hours ago 0 replies      
While the demo worked well, under the hood it looks like a somewhat aging codebase.
YouTube admits 'wrong call' over deletion of Syrian war crime videos middleeasteye.net
238 points by jacobr  2 days ago   137 comments top 15
alexandercrohde 2 days ago 7 replies      
I think YouTube needs to consider backing off from regulating political content.

The fact is politics and morality are inherently intermingled. One can use words like extremist, but sometimes the extremists are the "correct" ones (like our founding fathers who orchestrated a revolution). How could any system consistently categorize "appropriate" videos without making moral judgements?

itaris 2 days ago 7 replies      
I'm as much a proponent of automation as anyone else. But I think right now Google is trying to do something way too hard. By looking for "extremist" material, they are basically trying to determine the intention of a video. How can you expect an AI to do that?
molszanski 1 day ago 1 reply      
Let's look at the bigger picture. First, in March some newspapers find an extremist video. It has ~14 views and YT advertising all over it. They make a big deal out of it. As a result YouTube loses ad clients and tons of money.

Then, as a response, they build an algorithm. They don't want people to call them a "terrorist platform" ever again. Hence they take down the videos.

Now, this algorithm is hurting bystanders. IMO the real problem is the public and business reaction to the initial event.

And this piece of news is an inevitable consequence.

RandVal30142 2 days ago 0 replies      
Something people need to keep in mind when parsing this story is that many of the affected channels were not about militancy; they were local media outlets. Local outlets that only gained historical note due to what they documented as it was unfolding.

In Syria outlets like Sham News Network have posted thousands upon thousands of clips. Everything from stories on civilian infrastructure under war, spots on mental health, live broadcasts of demonstrations.


Including documenting attacks as they happen and after they have happened. Some of the affected accounts were ones that documented the regime's early chemical weapons attacks. These videos are literally cited in investigations.

All that is needed to get thousands upon thousands of hours of documentation going back half a decade deleted is three strikes.

Liveleak is not a good host for such outlets because it is not what these media outlets are about. Liveleak themselves delete content as well so even if the outlets fit the community it would not be a 'fix.'

jimmy2020 2 days ago 0 replies      
I really don't know how to describe my feelings as a Syrian when I know the most important evidence of the regime's crimes was deleted because of a "wrong call". And it's really confusing how an artificial algorithm gets confused between what is obviously ISIS propaganda and a family buried under the rubble, and this statement makes things even worse. Mistakenly? Because there are so many videos? Just imagine that happening to any celeb's channel. Would YouTube issue the same statement? I don't think so.
ezoe 2 days ago 0 replies      
What I don't like about these web giant services is that getting human support requires starting social pressure like this.

If they fucked something up with automation, reaching human support is hopeless unless you have very influential SNS status or something.

tdurden 2 days ago 2 replies      
Google/YouTube needs to admit defeat in this area and stop trying to censor, they are doing more harm than good.
balozi 2 days ago 2 replies      
Well, the AI did such a bang-up job sorting out the mess in the comment section that it got promoted to sorting out the videos themselves.
osteele 2 days ago 0 replies      
HN discussion of deletion event: https://news.ycombinator.com/item?id=14998429
DINKDINK 2 days ago 0 replies      
What about all the speech that's censored but doesn't have enough interest or political clout behind it to make people aware of the injustice of its censoring?
williamle8300 2 days ago 0 replies      
Google (parent company of YouTube) already sees itself as the protector of the public's eyes and ears. They might be contrite now, but they behave as a censoring organization.
norea-armozel 2 days ago 1 reply      
I think YouTube really needs to hire more humans to review flagging of videos rather than leave it to a loose set of algorithms and swarming behavior of viewers. They assume wrongly that anyone who flags a video is honest. They should always assume the opposite and err on the side of caution. And this should also apply to any Content ID flagging. It should be the obligation of accusers to present evidence before taking content down.
pgnas 2 days ago 1 reply      
YouTube (Google) has become the EXACT thing they said they were not going to be.

They are evil.

miklax 1 day ago 0 replies      
Bellingcat account should be removed, I agree on that with YT.
762236 2 days ago 5 replies      
Automation is the only real solution. These types of conversations seem to always overlook how normal people don't want to watch such videos. Do you want to spend your day watching this stuff to grade them?
Wekan: An open-source Trello-like kanban wekan.github.io
301 points by mcone  3 days ago   92 comments top 11
tadfisher 3 days ago 10 replies      
If you want to do Kanban right, double down on making it possible to design actual Kanban workflows. Pretty ticket UI with checklists and GIFs must be secondary to this goal.

Things that most actual Kanban flows have that no one has built into a decent product[0]:

 - Nested columns in lanes
 - Rows for class-of-service
 - WIP limits (per lane, per column, and per class-of-service)
 - Sub-boards for meta issues
The actual content of each work item is the least important part of Kanban; it could be a hyperlink for all I care. Kanban is about managing the flow, not managing the work.

[0] Please prove me wrong if there is such a product out there!
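The mechanics listed above don't require fancy UI. Per-column WIP limits, for instance, are just a bit of state plus a check on every pull. A rough Python sketch (class and method names are invented, not taken from any existing product; real boards would also need per-lane and per-class-of-service limits):

```python
from dataclasses import dataclass, field
from typing import Optional

class WipLimitExceeded(Exception):
    pass

@dataclass
class Column:
    name: str
    wip_limit: Optional[int] = None  # None = unlimited
    cards: list = field(default_factory=list)

class Board:
    def __init__(self, columns):
        self.columns = {c.name: c for c in columns}

    def add(self, column, card):
        col = self.columns[column]
        if col.wip_limit is not None and len(col.cards) >= col.wip_limit:
            raise WipLimitExceeded(f"{column!r} is at its WIP limit ({col.wip_limit})")
        col.cards.append(card)

    def move(self, card, src, dst):
        # Pull-based move: the destination's WIP limit is checked
        # before the card leaves the source column.
        self.add(dst, card)
        self.columns[src].cards.remove(card)

board = Board([Column("todo"), Column("doing", wip_limit=2), Column("done")])
for ticket in ("t1", "t2", "t3"):
    board.add("todo", ticket)
board.move("t1", "todo", "doing")
board.move("t2", "todo", "doing")
# board.move("t3", "todo", "doing")  # would raise WipLimitExceeded
```

Note that the card itself is just an opaque string here, which matches the point above: Kanban manages the flow, not the work items.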

bauerd 3 days ago 4 replies      
I thought for a second my touchpad just broke. Might want to make the landing page look less like there's content below the fold
nsebban 3 days ago 5 replies      
While I like the idea of having open source alternatives to the popular applications, this one is a pure and simple copy of Trello. This is a bit too much IMO.
tuukkah 3 days ago 0 replies      
Gitlab needs a better issue UI and perhaps this could be integrated.
Fej 2 days ago 4 replies      
Has anyone here had success with a personal kanban board?

Considering it for myself, even if it isn't the intended use case.

anderspitman 3 days ago 0 replies      
I think lack of an OSS alternative with a solid mobile app is the only thing keeping me on Trello at this point.
thinbeige 3 days ago 1 reply      
Trello has gotten so mature, has a great API, is well integrated with Zapier and hundreds of other services, AND is free (I still don't know why one would get the paid plan; even with bigger teams, the free version is totally fine) that it must be super hard for any clone or competitor to win users.
number6 3 days ago 3 replies      
Does it have authentication yet? Last time I checked there were no users, administration, or permissions.
alinspired 3 days ago 2 replies      
What's the storage backend for this app?

Also shout out to https://greggigon.github.io/my-personal-kanban/ that is a simple and offline board

onthetrain 2 days ago 1 reply      
Is it API-compatible with Trello? That would rock, being able to use Trello extensions.
yittg 2 days ago 2 replies      
What I want to know is why it has a Chinese-like name: kanban ^_^
Mastodon is big in Japan, and the reason why is uncomfortable medium.com
275 points by keehun  1 day ago   215 comments top 22
coldtea 1 day ago 13 replies      
"Uncomfortable" as in "offends my American puritan-inspired sensibilities".

"Pardon him, Theodotus: he is a barbarian, and thinks that the customs of his tribe and island are the laws of nature". George Bernard Shaw, "Caesar and Cleopatra".

(Slightly off topic: Feynman had a nice story in one of his books about how the maid in a Japanese guesthouse he stayed at walked in while he was naked and having a bath. She didn't flinch and just went on about her business like nothing had happened, and he was thinking what a fuss/embarrassment etc. that would have caused if it had happened in a hotel in the US -- when it's just an adult being naked with another adult present. It's not like everybody hasn't seen genitals before or it's a big deal.)

Animats 1 day ago 3 replies      
At last, something that could potentially challenge Facebook's world domination. Somebody gets a federated social network running with a substantial user base, and it runs into this.

The US position on child pornography comes from the Meese Report during the Reagan administration.[1] The Reagan administration wanted to crack down on pornography in general to cater to the religious base. But they'd run into First Amendment problems and the courts wouldn't go along. So child pornography, which barely existed at the time, was made the justification for a crackdown. By creating ambiguous laws with severe penalties for child pornography and complex recordkeeping requirements, the plan was to make it too dangerous for adult pornography to be made commercially. But the industry adapted, filling out and filing the "2257 paperwork" as required.[2] After much litigation, things settled down, porn producers kept the required records, and DoJ stopped hassling them about this.

So that's how the US got here. That's why it's such a big deal legally in the US, rather than being a branch of child labor law. Japan doesn't have the same political history.

Federated systems are stuck with the basic problem of distributed online social systems: anonymity plus wide distribution empowers assholes. That's why Facebook has a "real name" policy - it keeps the jerk level down.

[1] https://babel.hathitrust.org/cgi/pt?id=mdp.39015058809065;vi...
[2] https://en.wikipedia.org/wiki/Child_Protection_and_Obscenity...

rangibaby 1 day ago 5 replies      
I have lived in Japan since I was quite young (late 20s now) and don't see what the problem with lolicon is. It's not my thing, but if someone enjoys it that's their business, they aren't hurting anyone. That's just my gut feeling on the matter, I'm interested in hearing others' thoughts.
kstrauser 1 day ago 0 replies      
I own a Mastodon instance and love its federation options. For instance, I could decide to outright disconnect from that instance (in Mastodon speak, to "block" it) so that my users don't see it (and vice versa). I chose in this case to "silence" it, which means:

- My users can still talk to its users and see posts from people they follow.

- Posts from that instance don't show up on my "federated timeline" (which is a timeline of all posts made by my users and by the people they follow on other instances; great way to find new interesting people).

- I don't cache any media sent from that instance. The default is to cache images locally: if a user on a tiny instance has 10,000 followers on a busy one, the busy one doesn't make the tiny instance serve up 10,000 copies of every image.

So again, my users can talk to their users just like normal, but no one on my instance sees anything unless they specifically opt in to, and any content I dislike never travels through my network or gets stored on my server. I'm happy with that arrangement.
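The block/silence distinction described above boils down to a small per-domain policy table on each instance. A toy Python model of those semantics (names and structure are invented for illustration; Mastodon itself is written in Ruby and the real moderation options are more nuanced):

```python
from enum import Enum

class Policy(Enum):
    FEDERATE = "federate"  # default: full federation
    SILENCE = "silence"    # interaction still allowed, visibility opt-in
    BLOCK = "block"        # fully disconnected

class Instance:
    def __init__(self, name):
        self.name = name
        self._policies = {}  # remote domain -> Policy

    def set_policy(self, remote, policy):
        self._policies[remote] = policy

    def policy_for(self, remote):
        return self._policies.get(remote, Policy.FEDERATE)

    # Users can still follow and talk to a silenced instance, not a blocked one.
    def can_interact_with(self, remote):
        return self.policy_for(remote) != Policy.BLOCK

    # Silenced instances disappear from the federated timeline...
    def in_federated_timeline(self, remote):
        return self.policy_for(remote) == Policy.FEDERATE

    # ...and their media is no longer cached locally.
    def caches_media_from(self, remote):
        return self.policy_for(remote) == Policy.FEDERATE

home = Instance("example.social")
home.set_policy("silenced.example", Policy.SILENCE)
home.set_policy("blocked.example", Policy.BLOCK)
```

Because every admin keeps their own policy table, each community ends up with its own view of the network, which is exactly the property the article credits for Mastodon's popularity in Japan.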

xg15 1 day ago 2 replies      
I'm all for decentralized communication but I don't think the example of the article is particularly convincing and I wonder if the article is asking the right questions.

So the uncomfortable reason why Mastodon is so popular in Japan is that Pixiv operates a large Mastodon node which is used to share/discuss questionable images.

Discussions about lolicon aside, does any of this actually have anything to do with the detail that Mastodon supports federation?

The article states that decentralisation is important to allow different rules for different communities. However, if, e.g., Pixiv disabled federation or switched from Mastodon to something proprietary, would that change anything? Similarly, Reddit is highly centralized technically but - currently - provides freedom for each subreddit to define its own moderation rules (within the restrictions of Reddit, the company).

I feel there is a difference between "decentralisation" on the social layer and on the technical layer, and that difference should be kept in mind.

CurtMonash 1 day ago 3 replies      
Images of all sorts of criminal acts are deemed acceptable, as long as no harm is done to actual individuals during those images' creation.

I've never seen why child porn should be an exception.

That I would think poorly of somebody for enjoying certain categories of child porn is beside the point.

jancsika 1 day ago 1 reply      
> Its a constant struggle for Tor to recruit everyday users of Tor, who use the service to evade commercial surveillance.

That doesn't seem to be a struggle at all. All kinds of users leverage Tor for all kinds of reasons.

The struggle is to recruit everyday users who have the inclination, technical expertise, and rhetorical skill necessary to defend the technology against all kinds of fearmongering tactics.

There is a general lack of such people. If the same set of interests bent on defeating Tor set their sights on TCP, you can bet that technologists would be struggling to find ways to defend it that could resonate with the general public.

klondike_ 1 day ago 0 replies      
This really shows the advantages to a federated social network. People have all sorts of sensibilities about what is acceptable content, and a one-size-fits-all moderation approach like on Twitter will never work for everybody.
SCdF 1 day ago 2 replies      
On the topic of Mastodon, I wonder if the reason it hasn't caught on so much (outside of this use case) is precisely because it's federated.

When a new social network comes along, I often sign up ASAP just to try to grab SCdF, because I'm a human and vain. I will usually give it a bit of a crack once I've done that, but the need to squat my username is a big (and I realise, stupid) driver for me.

I've known about Mastodon for a while now, but I don't feel any pressure to sign up and check it out because there is no danger of someone else taking my username. Worst case I could just host my own instance against my domain.

emodendroket 1 day ago 0 replies      
Lolicon can also refer to live action stuff where the model is of age but looks younger. Also, the rules on this stuff in the US are quite murky and vary by state, rather than being simply illegal across the board as this article wants to suggest.
bryanlarsen 1 day ago 0 replies      
Porn is too ubiquitous and accepted on the common web to really drive technologies the way it used to.

For example, bittorrent started with porn, but that's not what drove its growth or made it successful. If the credit card companies didn't allow porn transactions on their networks, bitcoin would probably be much larger today. Tor is a similar story, I assume.

nihonde 1 day ago 0 replies      
Saying something with a few hundred thousand users is "big in Japan" is a stretch, at best. There are 130MM+ people in Japan.

I mean, I have an iOS app that has about that many MAU, and I consider it to be basically a failure.

codedokode 1 day ago 0 replies      
> lolicon drawings are prohibited

> gory, bloody and violent pictures are allowed

They must have something wrong with their head.

SCdF 1 day ago 2 replies      
The big surprise to me is that Deviant Art is supposed to be about photography!?
ygaf 1 day ago 0 replies      
>Its a constant struggle for Tor to recruit everyday users of Tor, who use the service to evade commercial surveillance. Those users provide valuable cover traffic, making it harder to identify whistleblowers who use the service, and political air cover for those who would seek to ban the tool so they can combat child pornography and other illegal content.

Wait - I thought people weren't meant to use Tor (thus its bandwidth) if they didn't need it. Or are they recruiting not just any people, but those who will contrive to browse all day / not download heavily?

fundabulousrIII 1 day ago 0 replies      
Thought they were talking about the band and the decibel level.
Eridrus 1 day ago 0 replies      
Most people are on Twitter because of network effects.

Twitter made this a non-issue for lolicon users by banning them, but it's also interesting to note that it sprang up due to support from an existing website.

Most people (myself included) who are dissatisfied with aspects of Twitter are not motivated enough to try to fix them.

mirimir 1 day ago 3 replies      
Well, it's not just pictures.

> After the enforcement, there will still be high school girls out there who are going to want to earn pocket money, and the men who target these girls wont disappear, either, said an official from the Metropolitan Police Department.

> The police come inside, so there are no more real JK girls at the shop. Most of the business is being arranged over the internet, through enko (compensated dating) services.


Global Internet morality is unworkable.

reustle 1 day ago 2 replies      
I was expecting to read about the heavy metal band, Mastodon.
coldtea 1 day ago 2 replies      
whipoodle 1 day ago 1 reply      
Child porn and Nazi stuff have long been really bright lines in user content. Recent events have revealed more acceptance of Nazis and adjacent groups in our society than previously thought, so I guess I could see the taboo against child-porn easing up too. Very sad and scary.
amelius 1 day ago 1 reply      
Sounds similar to the story of BetaMax versus VHS.

Edit: sorry for the brevity, pfooti below explains it well.

A 2:15 Alarm, 2 Trains and a Bus Get Her to Work by 7 A.M. nytimes.com
286 points by el_benhameen  3 days ago   300 comments top 15
ucaetano 3 days ago 7 replies      
San Francisco has more than twice the area of Manhattan, with half the population.

The San Francisco Bay Area CSA has about the same population as Switzerland, with 2/3 of the area of Switzerland.

Both have similar GDPs.

If someone wanted to live in Germany, and commute to work in Zurich, with Swiss salaries and German cost-of-living, their commute would be about 1h15min.

The housing and infrastructure problems in the SFBA are purely political, and self-inflicted.

nxsynonym 3 days ago 15 replies      
While the rising cost of housing is an easy target, why not put the pressure on the companies that are driving the influx of workers and out of control cost of living?

If tech companies are drawing people into cities and forcing out those who keep the city itself operating, why not have them subsidize and improve public transportation? Lower income housing? Encourage more remote work or move their headquarters out of the city centers? It seems crazy to me that people get driven out of their homes by real estate developers who re-develop due to tech-wages.

This example is a bit on the fringe, but it does illustrate the daily struggle of many normal people. 2+ hour commute is insane. And before someone comes in with the "why doesn't she get a new job closer to home?", you know it's not that easy - not to mention unfair to suggest that someone should change their entire life because their profession isn't flavor of the month.

jakelarkin 3 days ago 1 reply      
Reporters keep trying to find profiles of the housing crisis but this seems disingenuous. A lot of what this woman is doing is a choice; waking up hours before the train, huge house in Stockton vs condo/apartment closer. She makes $80k/year meaning post-tax $4600/month. She could easily afford a nice 1 or 2 bedroom in Pittsburg or Pleasanton for ~$2k a month, and her door-to-door commute would be well under 80 minutes.
__sha3d2 3 days ago 1 reply      
I was going to come in here and talk about how this strikes me more as a personal choice than a symptom of a systemic problem (I lived well in SF on $60k, and I mean, for fuck's sake, Stockton? That is aggressively far. There are many great closer options.), but who cares.

This woman seems to have a really peaceful existence. It would be nice to have such relaxing routines in my own life, especially in the face of stressful realities like a long commute on public transit. It makes me want to develop the fortitude that this woman exercises every single day.

edward 3 days ago 2 replies      
I think these articles about extreme commutes are interesting. I recommend you read what Mr. Money Mustache has to say on the subject.


jdavis703 3 days ago 2 replies      
One way to make this better is to support the plan to inline the Capitol Corridor and ACE trains with Caltrain via the unused and abandoned Dumbarton rail bridge [0]. Make sure to tell your elected officials you support this.

[0]: http://www.greencaltrain.com/2017/08/dumbarton-corridor-stud...

bartart 3 days ago 1 reply      
It appears that when local cities have control over housing, they make decisions that are good for them, but bad for the state that needs higher paying jobs that generate tax revenue. Plus low density housing is bad for the environment when compared to high density: http://news.colgate.edu/scene/2014/11/urban-legends.html

Other places would move heaven and earth to have a place like Silicon Valley and it seems like California is shooting itself in the foot with this self inflicted housing shortage.

santaclaus 3 days ago 1 reply      
If a municipality decides to open 100 seats of office space, they should be required to zone and approve 100 beds for said workers to sleep in. Otherwise you have the situation where towns like Brisbane can build office complexes for the tax revenue and entirely pass the buck to their neighbors for the cost of housing the new workers.
peterjlee 3 days ago 1 reply      
>Ms. James pays $1,000 a month in rent for her three-bedroom house, compared with $1,600 for the one-bedroom apartment she had in Alameda.

She was forced out of her original apartment but unless she had a new situation that required her to have more bedrooms with a lower budget, some part of this extremity was her choice. I think NYT should've chosen a better example if the point they were trying to get across is "tech boom forcing workers out".

turtlebits 3 days ago 1 reply      
A little misleading as she needs to catch the train at 4am. Still, an almost 3 hour commute is brutal.
ChuckMcM 3 days ago 2 replies      
This is a pretty amazing example. It raises more questions than it answers however. The big one is "Why continue working in SF with this horrible commute?"

Market dynamics suggest that in a 'free' market, on an individual basis, a person seeks to maximize their value received. So in this case, if this was an open market, the (commute + $81K/year) > any other option that she might choose.

So what are we missing that there isn't at least an equal paying job available in the Central Valley that would cut her commute by 80 - 90% ? I can imagine lots of things that might contribute like pension eligibility, or specialization. But "Public Health Advisor for DHHS" seems to be a position that is available in many cities in the state. I would have liked to read what about this job in this city was so important.

And all of that goes to the meta question: salary growth has been flat for a long time, yet people have felt they had to keep their jobs at all costs. At what point does the advantage change? 3% unemployment? 2%? What needs to happen so that people are confident enough to say "pay me more or I'll work somewhere else"?

As a result of the stuff I wonder about, I feel like there might be a tremendous amount of tension in the economy that isn't as visible as one might hope. And I wonder what happens when it snaps. Do we get the 10 - 15% stagflation of the 70s?

deckar01 3 days ago 3 replies      
I paid about 30% of my salary in 2014 for a SRO [0] in San Francisco with a 10 minute commute. I decided that the quality of living was not worth the potential future earnings for staying in the bay area. In Tulsa I pay about 14% of my salary to rent a large apartment with a 4 minute commute.

[0]: https://en.wikipedia.org/wiki/Single_room_occupancy

ghomrassen 3 days ago 2 replies      
What's to stop San Francisco from creating a lot more high-density housing? Doesn't that solve many issues with housing supply? Looking at wikipedia, the density is crazy low even compared to other major cities in the US.
owenversteeg 3 days ago 1 reply      
Why isn't there a good, inexpensive bus? You can fit almost one hundred passengers on a single bus. Buses are pretty fuel-efficient per passenger, easy to reroute if you have more/less demand, and you can go directly to where you need to go. They scale very well - if there are only 20 people or so you can send a small bus, if there are thousands you can send multiple large buses.

Sure, traffic can be an issue, but I'd imagine train delays are roughly as big of a problem.

In this case, it looks like her 3hr 20min commute could become 1hr 30min with a bus that goes from Stockton.

Why has nobody done this?

PopsiclePete 3 days ago 5 replies      
So why do we keep commuting for jobs that don't really require our physical presence?

It was a long hard battle for me to be able to work from home, and yes, I sometimes do miss out on face-to-face interaction, but going to the office 2 instead of 5 days a week is still a huge win - I'm not in a car out there, making traffic worse for you. You're welcome.

Can we please solve the supposed "interaction" problem with some nice digital pens and white-boards and web cams and just .... work from anywhere?

GitHub CEO Chris Wanstrath To Step Down After Finding Replacement forbes.com
260 points by ahmedfromtunis  3 days ago   49 comments top 13
andygcook 2 days ago 5 replies      
Random story about Chris...

I saw him speak at a startup event in 2010 at MIT called Startup Bootcamp. It was probably my first startup-related conference and he was the first talk in the kick-off slot at 9am. He gave a great talk recapping the origin of GitHub and how it grew out of another project called FamSpam, a social network for families.

After the talk I had to run to the restroom and happened to run into Chris out in the entryway. I introduced myself and we started chatting. As we were talking, people started walking into the event late. They saw us standing in the entrance and started asking where to go.

Instead of deferring responsibility to someone working at the event, Chris sat down at the empty welcome table and started checking people in by giving them schedules and helping them create name tags. We ended up checking in a few dozen people together while we talked more. No one knew who Chris was when they walked in, and just assumed he was a member of the event staff. I think had they known he was the co-founder of GitHub they probably would have paid more attention to him.

I ended up sending him a t-shirt and he took the time to shoot me back an email saying thanks. The subject line was "Dude" and the text was "Got the shirt. It's so awesome. You rock. Tell your brother yo, too!"

Anyways, I just thought it was kind of cool he took it upon himself to help out with checking people in at the event even though he had volunteered to travel all the way to Boston to speak for free to help out young, aspiring entrepreneurs by sharing his learnings. It always kind of stuck with me that you need to stay down to earth and pay it forward no matter how successful you get.

matt4077 2 days ago 0 replies      
Github is among the best things that ever happened to OSS. Compared to anything that came before, it is a pleasure to browse, it is intuitive, and it has managed to corral millions of people with vastly different backgrounds into a golden age of OSS productivity.

In the 10 years+ before Github, I never even tried to contribute codeeach project had its own workflow, and sending an email somehow felt intimidating. Today, spending an hour here or there to improve it slightly has almost become a guilty pleasure.

So, I guess what I'm saying is: Thank you!

forgingahead 2 days ago 2 replies      
Forbes is ad-infested hell, here is an Outline link:


jdorfman 2 days ago 0 replies      
When I was at GitHub Satellite last year in Amsterdam, I saw Chris walk in to the venue and look around at the amazing production and smile. You could tell how proud he was of his team and the brand he helped create. I am glad to see he is staying with the company, I'm sure the new CEO will need his advice from time to time to keep GitHub great for the next 10 years.
DanHulton 2 days ago 0 replies      
Why the title change? As far as I can tell, it's factually incorrect, as well.

Wanstrath is planning on stepping down and hasn't stepped down yet.

geerlingguy 2 days ago 8 replies      
Any other way of viewing this story? On my iPad with Focus, I just get a blurred out screen when I visit Forbes.com now. I remember it used to show a 'please turn off your ad blocker' dismissible splash screen, but that seems to not be the case any more.
tdumitrescu 2 days ago 0 replies      
I've never met the guy but have a ton of respect for his work - his open source projects like Resque and pjax were awesome for their time. I imagine GitHub has benefited a lot from having real coders at the helm for so long.
nodesocket 2 days ago 0 replies      
> Wanstrath plans to focus on product strategy and the GitHub community after stepping down from the CEO role, working directly on products and meeting with customers.

Just a theory, but perhaps they're bringing in a new professional CEO for an IPO?

jbrooksuk 2 days ago 0 replies      
> GitHub may seek to become more of a marketplace that can help developers show off their work and take on additional projects, with GitHub taking a portion as a fee, says Sequoia investor Jim Goetz.

They already have a Marketplace offering.

grandalf 2 days ago 0 replies      
Chris is one of the few well-known developers who conveys a deep love of software engineering. Looking forward to reading some of the code he writes in the coming months.
ShirsenduK 2 days ago 0 replies      
The title is misleading. He plans to, but hasn't!
amgin3 2 days ago 1 reply      
PHP_THROW_AWAY1 2 days ago 1 reply      
Wrong title
Elixir in Depth - Reading and personal notes digitalfreepen.com
335 points by rudi-c  3 days ago   143 comments top 15
randomstudent 3 days ago 4 replies      
The author doesn't talk a lot about preemptive scheduling, which is probably the best thing about the Erlang virtual machine. This video explains what preemptive scheduling is and why it is extremely useful: https://www.youtube.com/watch?v=5SbWapbXhKo

First, the speaker creates an application endpoint that runs into an infinite loop on invalid input. Then, he shows how that doesn't block the rest of the requests. Using native BEAM (EDIT: BEAM = EVM = Erlang Virtual Machine) tools, he looks for the misbehaving process (the one running the infinite loop), prints some debug information and kills it. It's pretty impressive.

Another great (and lighter) resource, is "Erlang the Movie" (https://www.youtube.com/watch?v=xrIjfIjssLE). It shows the power of concurrency through independent processes, the default debug tools, and the power of hot code reloading. Don't miss the twist at the end.

jesses 3 days ago 5 replies      
The author says it's "hard to use Elixir with services like Heroku because your instances won't find each other by default (the way they're supposed to)".

I just wanted to mention https://gigalixir.com which is built to solve exactly this problem.

Disclaimer: I'm the founder.

acconrad 3 days ago 5 replies      
All of my pet projects are run on Elixir/Phoenix. If there is a language/framework to learn, this would be it. As approachable as Rails with the benefits of handling real time applications (chats are trivial with Channels) and faster runtimes than Rails (by orders of magnitude).

Happy to help anyone out if they're interested in learning!

sjtgraham 3 days ago 3 replies      
I don't think the Gartner hype cycle applies to Elixir, and I think that is largely because it is built on top of a very mature, production-tested platform (Erlang OTP). I have been using it in production for almost two years without issue on https://teller.io/, so if the GHC applies, it's very elongated!
brightball 3 days ago 1 reply      
The only part of that article I'd clarify is around deployment and devops best practices.

You can deploy Elixir exactly the same as any other language. In some cases, it just means deciding that you don't need some of the extra features that are available, like hot reloading... which not every project needs.

You can still use immutable images and take advantage of the built-in distributed databases by using mounted block storage if need be.

You can use everything out there. Early on figuring out some of the port mappings and how to handle things was more difficult but as far as I've seen, those problems have mature solutions all around now.

palerdot 3 days ago 0 replies      
If someone is on the fence to make a jump to the elixir world I recommend elixir koans - https://github.com/elixirkoans/elixir-koans

I have not started looking into phoenix as I'm still exploring elixir, but I'm happy to have started learning elixir with koans along with official elixir guide.

sergiotapia 3 days ago 0 replies      
Great write up! Guys, if you're on the fence about learning Elixir - dive on in!

You won't be disappointed and you'll be surprised how many times you'll want to reach out to auxiliary services and find out "Oh I can just use ETS/OTP/GenServer/spawn".

esistgut 3 days ago 2 replies      
I don't like the debugging workflow described in the linked article "Debugging techniques in Elixir"; it reminds me of DDD: a separate tool not integrated with my main development environment and requiring extra manual steps. I tested both the JetBrains plugin and the vscode extension, and both failed (unsupported versions, bugs, etc.). To Elixir users: what do you think about the state of debugging tools? What is your workflow?
tschellenbach 3 days ago 2 replies      
Yes Elixir handles concurrency better than Ruby. In terms of raw performance it's nowhere near Go, Java, C++ though. Rails/Django are fast enough for almost all apps, if you need the additional performance improvements of a faster language you'd probably end up with one of those 3. Wonder how much need there is for a language that takes the middle ground in terms of performance. Looks very sexy though, really want to build something with it :)
innocentoldguy 3 days ago 1 reply      
I don't know why, because I don't think they are all that similar, but I'm often asked to defend my choice of using Elixir professionally vs. Go. From the article, this is one of the big reasons I chose Elixir over Go:

"Go's goroutines are neither memory-isolated nor are they guaranteed to yield after a certain amount of time. Certain types of library operations in Go (e.g. syscalls) will automatically yield the thread, but there are cases where a long-running computation could prevent yielding."

goroutines also take up about 10 times the memory.

bitwalker 3 days ago 0 replies      
I think the discussion around deployment may have been unnecessarily tainted by their experience using edeliver - it's an automation layer for building and deploying releases, but as mentioned it is configured via shell scripts and because it does a lot, a lot can go wrong.

The basic unit of Elixir (and Erlang for that matter) deployments is the release. A release is just a tarball containing the bytecode of the application, configuration files, private data files, and some shell scripts for booting the application. Deployment is literally extracting the tarball to where you want the application deployed, and running `bin/myapp start` from the root of that folder which starts a daemon running the application. There is a `foreground` task as well which works well for running in containers.
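
The release workflow described above is simple enough to mimic end to end in a few lines of shell (the `myapp` name comes from the comment; the tarball here is a stand-in with a fake boot script, not a real Erlang release):

```shell
# Build a stand-in "release": a tarball whose bin/myapp boot script just
# prints a message. A real release would also carry compiled bytecode,
# config files, and optionally ERTS, but the shape is the same.
mkdir -p /tmp/release_build/bin
printf '#!/bin/sh\necho "myapp started"\n' > /tmp/release_build/bin/myapp
chmod +x /tmp/release_build/bin/myapp
tar -czf /tmp/myapp.tar.gz -C /tmp/release_build .

# Deployment: extract the tarball where you want the app, then run the
# boot script it ships with.
mkdir -p /tmp/myapp_deploy
tar -xzf /tmp/myapp.tar.gz -C /tmp/myapp_deploy
/tmp/myapp_deploy/bin/myapp start   # prints: myapp started
```

The real boot script also handles stop, restart, and upgrade/downgrade, but the deploy step itself is just the extract-and-run above.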

My last Elixir gig prior to my current one used Docker + Kubernetes and almost all of our applications were Elixir, Erlang, or Go. It was extremely painless to use with releases, and our containers were tiny because the release package contained everything it needed to run, so the OS basically just needed a shell, and the shared libraries needed by the runtime (e.g. crypto).

My current job, we're deploying a release via RPM, and again, releases play really nicely with packaging in this way, particularly since the boot script which comes with the release takes care of the major tasks (start, stop, restart, upgrade/downgrade).

There are pain points with releases, but once you are aware of them (and they are pretty clearly documented), it's not really something which affects you. For example, if you bundle the Erlang runtime system (ERTS) in a release, you must deploy to the same OS/architecture as the machine you built the release on, and that machine needs to have all of the shared libraries installed which ERTS will need. If you don't bundle ERTS, but use one installed on the target machine, it must be the same version used to compile your application, because the compiled bytecode is shipped in the release. Those two issues can definitely catch you if you just wing a deployment, but they are documented clearly to help prevent that.

In short, if there was pain experienced, I think it may have been due to the particular tool they used - I don't think deployment in Elixir is difficult, outdated, or painful, but you do have to understand the tools you are using and how to take advantage of them, and I'm not sure that's different from any other language really.

Disclaimer: I'm the creator/maintainer of Distillery, the underlying release management tooling for Elixir, so I am obviously biased, but I also suspect I have more experience deploying Elixir applications than a lot of people, so hopefully it's a wash and I can be objective enough to chime in here.

Exuma 3 days ago 1 reply      
This is a really good write up, thank you.
RobertoG 3 days ago 2 replies      
That was a nice article. Thanks.

I'm curious about the first table in the "Interop with other systems" part.

It seems to say that an Erlang deployment doesn't need Nginx or an HTTP server; does anybody know how that works?

EDIT: I read the cited source (https://rossta.net/blog/why-i-am-betting-on-elixir.html) and it seems that is the case.

It looks too good to be true, yet. It would be nice if somebody with Erlang deployment experience could comment.

gfodor 3 days ago 0 replies      
One thing I didn't see covered that I'm currently trying to understand with Elixir is the relationship between process state and disk-backed state. (For example fetched via Ecto.) Does the role of a traditional RDBMS change in an elixir system? What are the durability guarantees of process state? Etc. Any real world experience would be super helpful to hear about.
brudgers 3 days ago 0 replies      
I can see why a person might choose Elixir over Ruby or vice versa. The tradeoffs between Elixir and Erlang are a lot less clear to me.
Flutter - A mobile app SDK for iOS and Android flutter.io
268 points by Mayzie  1 day ago   96 comments top 16
thinbeige 1 day ago 3 replies      
Nowadays the issue with app development is not only having two OSes to develop for (and I doubt that Flutter is a real help here); the much bigger problem is user acquisition.

User acquisition got so insanely expensive for apps that there are few to no business models where you can justify or recoup the user acquisition costs.

PascalW 1 day ago 7 replies      
This looks pretty neat, Dart is a nice language.

Flutter looks pretty different from React Native on one side and Cordova/webview based frameworks on the other side. Flutter is not based on webviews, but is also not using the native widgets but instead rendering custom widgets.

To me, this is a little weird. One of the downsides of Webview based apps is that it's harder to align with the native OS look and feel. React Native solves this problem, but Flutter clearly has the same problem.

sathis 1 day ago 3 replies      
The real downside of using flutter is that you can't embed (or inline) any native widgets like video or maps.
JamesSwift 1 day ago 1 reply      
> We test on a variety of low-end to high-end phones (excluding tablets) but we don't yet have an official device compatibility guarantee. We do not offer support for tablets or have tablet-aware layouts.

Thats a pretty serious, and surprising, limitation.

ziggzagg 1 day ago 5 replies      
Why is it that Flutter does not have a web target? Everything is nice and fast about it; it's a shame that after building cross-platform mobile apps, you'll have to start the web app from scratch on another platform.
grey-area 1 day ago 2 replies      
Anyone using this and have experiences to report? I'm thinking of using it for a project soon. Specifically, how does it compare with 2x native apps for Android and iOS. How was Dart as a language, and the bindings to different native SDKs? What problems did you encounter when building apps on both platforms?
victor106 1 day ago 0 replies      
Xamarin.Forms is another option to consider in this space.
devdoomari 1 day ago 1 reply      
I'll jump to Flutter when: 1) Scala supports a Dart backend (just my preference), and 2) Flutter solves 'calling native libraries/SDKs' better (the current 'message passing' seems so weak - I want to do video processing/etc).

but for other use cases, flutter seems nice (for 90% of app use cases?)

zanalyzer 1 day ago 0 replies      
Flutter is also the name of a company doing vision based gesture UI that Google bought in 2013 and hasn't been heard of since.


mwcampbell 1 day ago 1 reply      
The FAQ says Flutter has basic accessibility support. I wonder what's missing. If there's a Flutter-based app on the iOS App Store that uses some non-trivial widgets, I'd like to try it out with VoiceOver.
rhubarbcustard 1 day ago 1 reply      
Do you think this is a better option than Apache Cordova? I've been starting to look at Cordova to build some pretty simple apps for business applications.

Does anyone have an opinion on whether Flutter would be a better choice? Why?

I also looked at Xamarin but that seems a little in-depth for what I need, which is basically some data-input screens (using standard Web-style controls) and then to upload the data to an API.

natch 1 day ago 2 replies      
From the FAQ:

>We are aware of apps built with Flutter that have been reviewed and released via the App Store.

Which apps? I'd like to try them out and see how they look and feel.

tomerbd 19 hours ago 1 reply      
if it supported web sites as well I would have clicked the link and checked it out.
mk89 1 day ago 0 replies      
Looks really promising!
0xbear 1 day ago 2 replies      
Stop trying to make Dart happen. It's not going to happen.
A big, successful trial of probiotics theatlantic.com
255 points by ValentineC  3 days ago   65 comments top 16
rubidium 3 days ago 3 replies      
Here's the Nature article (paywall): http://www.nature.com/nature/journal/vaop/ncurrent/full/natu...

"The special mixture included a probiotic called Lactobacillus plantarum ATCC-202195 combined with fructo-oligosaccharide (FOS), an oral synbiotic preparation developed by Dr. Panigrahi." (from https://medicalxpress.com/news/2017-08-probiotics-sepsis-inf...)

Decrease in other infections (respiratory and nasal) is particularly interesting to see.

Now I want a HN MD to weigh in.

manmal 3 days ago 1 reply      
If you want to try L. Plantarum yourself, get the strain 299v if you can. There's one by Jarrow (not affiliated, just had good experiences with it). AFAIK, most other strains of L. Plantarum produce D-lactic acid, while 299v produces L-lactic acid (I've even seen a paper where they used 299v to treat acidosis). D-lactic acid can be hard to metabolize and can lead to acidosis. Probiotics-induced acidosis is a thing - not good, most people with this end up in hospital and need infusions and some kind of intervention for the long term.

Acidophilus generally produces D-lactic acid, while e.g. L casei shirota (Yakult) does not. It pays to take a hard look at the strains in a probiotic.

Ovah 3 days ago 1 reply      
I find the word probiotic to be inherently problematic. It includes any and all microorganisms that have a single positive health benefit, whether or not they're detrimental in other regards. Thus, even if a bacterium were to penetrate the gut epithelium (very bad), it would still be a probiotic if it reduced constipation.

Even if the research supporting the health benefits of a supplemental bacteria is sound, a single study always has a restricted scope.

It's saddening that 'probiotics' have found their way to products such as baby formula, while there is pretty much no regulation governing their sales and use (at least in Europe).

culiuniversal 3 days ago 1 reply      
They initially planned to include 8000 babies, but stopped early. With only a week of treatment they already found a significant decrease in sepsis rates. They stopped because they thought it'd be highly unethical to deprive the other babies of a life-saving treatment. This is a heartwarming example of when humanity and science conflict, but I'm glad humanity won
RcouF1uZ4gsC 3 days ago 2 replies      
The 9% placebo and even the 5% treatment sepsis rate seems very high. According to http://emedicine.medscape.com/article/978352-overview#a6 the US rate of neonatal sepsis is around 2/1000 live births so around 0.2%.

Given this, I wonder how applicable these results would be to neonates in the US.
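
For scale, a quick back-of-the-envelope comparison of the rates quoted above (all figures come from the comment and its linked source, not independent data):

```shell
# Rates quoted: 9% placebo, 5% treatment, ~2/1000 (0.2%) US baseline.
awk 'BEGIN {
  us = 2 / 1000
  printf "placebo:   %.0fx the US rate\n", 0.09 / us
  printf "treatment: %.0fx the US rate\n", 0.05 / us
}'
# prints:
# placebo:   45x the US rate
# treatment: 25x the US rate
```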

Mz 3 days ago 1 reply      
Aside from preventing sepsis, it also reduced the risk of infections by both the major groups of bacteria: the Gram-positives, by 82 percent; and the Gram-negatives, which are harder to treat with antibiotics, by 75 percent. It even reduced the risk of pneumonia and other infections of the airways by 34 percent. That was completely unexpected, says Panigrahi, and it's the result he's especially excited about. It suggests that the synbiotic isn't just acting within the gut, but also giving the infants' immune systems a body-wide boost.

I don't know why they are so surprised. Given that some sources say the gut constitutes up to 70 percent of the immune system, it should be fairly obvious that improving gut health will have such effects.

Furthermore, you can infer a fairly direct relationship between gut health and lung health based on what happens in the body at altitude: You start urinating more to compensate for the thin air reducing your ability to exhale wastes. The body starts clearing them out of the blood by shunting them to the kidneys.

amai 1 day ago 0 replies      
I can recommend https://en.wikipedia.org/wiki/Lactobacillus_reuteri

"Similar results have been found in adults; those consuming L. reuteri daily end up falling ill 50% less often, as measured by their decreased use of sick leave."

icelancer 3 days ago 0 replies      
The trial looks to be exceptionally rigorous and the sample size is very large. This is exciting science to say the least!
csr12928834 3 days ago 0 replies      
Interesting. I was skeptical of probiotics but the evidence seems to be suggesting otherwise.

This kind of builds on meta-analyses showing that probiotic use cuts rates of antibiotic-associated C. difficile infection.

Young children have very unstable microflora systems, so the results make sense from that perspective also.

mattparlane 2 days ago 2 replies      
Why the use of a placebo in a trial involving babies? Surely they would be immune to any placebo effect?
Havoc 3 days ago 3 replies      
My issue isn't on the faith in probiotics side...the problem is which ones do I buy.

It's near impossible to know you're getting the good stuff so to speak.

markdown 3 days ago 0 replies      
Does yoghurt have the good stuff?
TheBeardKing 3 days ago 1 reply      
The study was done on newborns just starting breastfeeding, but doesn't say whether only vaginal births were included or C-sections as well.
colordrops 3 days ago 3 replies      
All the products containing Lactobacillus plantarum may see a boost in sales.
quickthrower2 2 days ago 0 replies      
nikolay 3 days ago 1 reply      
I've been using General Biotics 115-strain pre- and probiotic product called Equilibrium [1]. You can find more info in the Science section [2].

[1]: https://www.generalbiotics.com/orders/new/

[2]: https://www.generalbiotics.com/science/

BYTE Magazine's Lisp issue (1979) [pdf] archive.org
248 points by pmoriarty  3 days ago   151 comments top 27
boramalper 3 days ago 11 replies      
When I come across magazines from the past such as this, I keep wondering why and when we stopped writing such beautifully crafted technical articles for the masses and instead turned to advertisement-like pieces on consumer electronics. Look how empowering those articles were, treating you as a creative being, and how passivizing the current ones are, encouraging perpetual consumption.
HankB99 3 days ago 2 replies      
Byte was my favorite magazine of all time - even better than Computer Shopper. ;)

I remember the last page article - Stop Bit. One particularly memorable one described how various professionals would search for an elephant. Some I recall are:

- A C programmer would start at the southernmost point in Africa and travel east until they got to the ocean and then move north and head west to the opposite shore, repeating until they had covered all of Africa. An assembler programmer would follow the same strategy but do it on their hands and knees.

- A college professor would prove the existence of an elephant and leave it as an exercise for the students to actually find one.

- A marketing executive would paint a rabbit gray and call it a desktop elephant.

I wonder if I could find that article. I'll have to see if Archive.org is searchable. Or maybe I can find it by searching today: https://www-users.cs.york.ac.uk/susan/joke/elephant.htm :D

dvfjsdhgfv 3 days ago 3 replies      
The "You can do surprising things when you have 64 kilobytes of fast RAM" ad made me realize how little we appreciate the abundant resources we are lucky to have these days...
keithnz 3 days ago 4 replies      
yeah, lisp is cool, but I'd need a computer to run it on... I'm seriously considering one of those 8070 Series I Business Systems..... it has dual floppies, 591K bytes of storage, a 19" color display, a 60 cps impact matrix printer, and!! they say at twice the price it would be a bargain, so at $7000 it seems the way to go
KC8ZKF 3 days ago 1 reply      
In "About the Cover" on page 4, the editor invites the reader to examine the monolith and "identify the textbook from which these S-expression fragments were taken, and the purpose of the program."

Anybody have a clue?

magoghm 3 days ago 0 replies      
I had a subscription to BYTE magazine. That issue was how I discovered there was this amazing language called LISP.
cmic 3 days ago 0 replies      
My first issue was in 1984 (Forth Issue). I couldn't live without it, then. Until the end of Byte. We had no equivalent source of info in France. It was a fantastic and eclectic source of programming hints, ideas, whatever. I'm now 66 and retired as a Sysadmin. Very good memories.--cmic
lispm 3 days ago 1 reply      
There is another BYTE Lisp issue: February 1988 Vol 13 No 2
dr_ick 3 days ago 2 replies      
265MB PDF!!!

I don't think this thing is 1979 compatible.

huffer 3 days ago 1 reply      
Wow, great! This magazine is precisely as old as I am. I wasn't born with a lisp, just a fondness for it (and now I know why).
tolgahanuzun 3 days ago 1 reply      
Only half of the book is advertising. But I can not deny that it is interesting. Cool
Gargoyle 3 days ago 0 replies      
Also featuring A Preview Of The Motorola 68000!
tannhaeuser 3 days ago 4 replies      
I've got a collection of 1988-90 AI Expert magazines on LISP, Prolog & Co. with interesting design features such as lino-cut-style artwork for (periodical) columns and a special Elite condensed type face for code. Does anybody know if it's ok to put these on a web site with credit when I can't get in touch with the original authors and artists (does archive.org have or need special permission from BYTE)?

Btw, I'd love links to 1986-1996ish articles on SGML and markup technologies.

pinewurst 3 days ago 1 reply      
wiz21c 3 days ago 1 reply      
I know it's totally local to french speakers, but does anyone remember Hebdogiciel or Pom's ? Both were great. The former was crazy and had lots of code in it and a very "free speech" nature (think Charlie Hebdo but for computers) and the latter was all about Apple (2, 2+, 2e, 2c; not the i-thing you're looking for)
daly 2 days ago 0 replies      
I wrote an article on hobby robots (I was working for Unimation at the time, the company that invented robots). Sadly I can't seem to find the issue online.
hultner 3 days ago 2 replies      
What an amazing cover, anyone know if it's available as a poster or standalone graphics?
pagl309 3 days ago 2 replies      
Would be interested to know if these articles are worth reading to learn about the language; i.e. ~40 years later, has the language changed too much to make the content here useful for learning purposes?
delegate 3 days ago 0 replies      
I haven't seen all the ads in the magazine, but I notice that most (all?) of the companies are no longer around. Except one. The ad had a bit of a prophecy in it too: "You can't outgrow Apple."
eulevik 3 days ago 0 replies      
Great stuff, good to see again
KirinDave 3 days ago 1 reply      
This is so, damn, fantastic.

I wish I could say I'm nostalgic for it, but it predates my existence. What's it called when you yearn for the style and typography shortly before you were born?

eleitl 3 days ago 0 replies      
I kept raiding the library archives of the local army university for such back issues in the 1980s and 1990s. Great technical content.
ngvrnd 3 days ago 0 replies      
I believe I still have a copy of that issue. Does anyone remember "M-Lisp" at all?
emmelaich 3 days ago 0 replies      
David Betz's Xlisp first appeared in BYTE I believe. It's still around.
s369610 3 days ago 0 replies      
page 66 discusses the "Model of the Brain" called CMAC (Cerebellar Model Arithmetic Computer). Now I have to read up on what became of that model.
VonGuard 3 days ago 2 replies      

Shit, it's not working...

idibidiart 3 days ago 6 replies      
Lisp is the only high-level programming language that has no syntax. In Lisp, s-expressions are used to encode both form (data structures) and function (algorithms) of computer programs. Since code and data are seen for what they are (two sides of the same coin) the distraction of a real PL syntax is eliminated, and the programmer is able to think more coherently.
Introducing WAL-G: Faster Disaster Recovery for Postgres citusdata.com
212 points by craigkerstiens  2 days ago   61 comments top 10
drob 2 days ago 2 replies      
This is great. Can't wait to be using it.

We've been using WAL-E for years and this looks like a big improvement. The steady, high throughput is a big deal: our prod base backups take 36 hours to restore, so if the recovery speed improvements are as advertised, that's a big win. In the kind of situation in which we'd be using these, the difference between 9 hours and 36 hours is major.

Also, the quality of life improvements are great. Despite deploying WAL-E for years, we _still_ have problems with python, pip, dependencies, etc, so the switch to go is a welcome one. The backup_label issue has bitten us a half dozen times, and every time it's very scary for whoever is on-call. (The right thing to do is to rm a file in the database's main folder, so it's appropriately terrifying.) So switching to the new non-exclusive backups will also be great.

We're on 9.5 at the moment but will be upgrading to 10 after it comes out. Looking forward to testing this out. Awesome work!

kafkes 2 days ago 9 replies      
Hello everyone, I'm the primary author for WAL-G and would be happy to answer any questions.
sehrope 2 days ago 2 replies      
I've used WAL-E (the predecessor of this) for backing up Postgres's DB for years and it's been a very pleasant experience. From what I've read so far this looks like it's superior in every way. Lower resource usage, faster operation, and the switch to Go for WAL-G (v.s. Python for WAL-E) means no more mucking with Python versions either.

Great job to everybody that's working on this. I'm looking forward to trying it out.

upbeatlinux 2 days ago 1 reply      
Wow, great work! I am definitely going to test this out over the weekend. However AFAICT the `aws.Config` approach breaks certain backwards compatibility w/how wal-e handles credentials. Also wal-g does not currently support encryption. FWIW, I would love to simply drop-in wal-g without having to make any configuration changes.
jfkw 2 days ago 1 reply      
Will WAL-G eventually support the same archive targets as WAL-E (S3 and work-alikes, Azure Blob Store, Google Storage, Swift, File System)?
craigkerstiens 2 days ago 0 replies      
For those interested in the repo directly to give it a try you can find it here: https://github.com/wal-g/wal-g
jarym 2 days ago 1 reply      
"WAL-E compresses using lzop as a separate process, as well as the command cat to prevent disk I/O from blocking."

Good to see people sticking to the unix philosophy of doing one thing well and delegating other concerns - cat and lzop are both fine choices!
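
A sketch of that pipeline shape (gzip stands in for lzop here, since the point is streaming through `cat` rather than the specific codec; the file path is a made-up stand-in for a WAL segment):

```shell
# Make a stand-in for a WAL segment (real Postgres WAL segments are 16MB).
dd if=/dev/zero of=/tmp/wal_segment bs=1024 count=64 2>/dev/null

# cat streams the file into the compressor over a pipe, so the compressor
# reads from the pipe rather than blocking on direct disk I/O; each tool
# does exactly one job.
cat /tmp/wal_segment | gzip -c > /tmp/wal_segment.gz

gzip -t /tmp/wal_segment.gz && echo "archive ok"   # prints: archive ok
```

WAL-E's actual invocation differs; this only illustrates the one-tool-per-job pipe the comment praises.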

gigatexal 1 day ago 0 replies      
I wonder where Python will end up in the next five or so years if Go keeps getting chosen for concurrent or high-performance code like this.
mephitix 2 days ago 0 replies      
Fantastic intern project, and fantastic work by the intern!
X86BSD 1 day ago 0 replies      
Why would this be a better option than a simple zfs snapshot, zfs send/recv backup and recovery strategy?
       cached 21 August 2017 02:11:02 GMT