hacker news with inline top comments, 19 Sep 2017
U.S. Navy to use Xbox 360 controllers to operate periscopes aboard submarines pilotonline.com
132 points by richardboegli  3 hours ago   49 comments top 14
jedc 2 hours ago 3 replies      
Former submariner here. My thoughts echo those of an ex-submariner group I'm a part of -- it makes sense across the board. It's 1) a cost-saving measure, 2) WAY easier to get replacements, and 3) WAAAYY easier to train new people to use it safely and efficiently.
wakamoleguy 3 hours ago 2 replies      
This is one of those simple ideas that makes a lot of sense. I'm glad to hear there are teams working to reduce costs and improve UX for the military.

Microsoft obviously did a lot of user testing with their controller, but they must have been optimizing for a certain consumer price point as well. It makes me wonder what sort of controller you would have if they spent all that focus on usability, but with the same $38,000 price tag the periscope control panel commands.

danschumann 2 hours ago 0 replies      
That's a good sign. I'm sure the alternative is getting a quote from some contractor who would want to reinvent the wheel for millions of dollars, with probably a worse result. Microsoft already did the legwork of development. More consumer products (as long as there are sufficient fallbacks / quality control) in military operations is a-ok in my book.
newman8r 39 minutes ago 1 reply      
I have been using an Xbox One controller instead of a mouse for 4 or 5 months now. I use the qjoypad package to map it. It's really great, and I believe it's why my chronic right-side back pain has completely vanished.

So as someone who uses this type of input device, I think it's a great fit for military use.

miketery 3 hours ago 0 replies      
That's great. Those controllers are awesome: durable, easily replaceable, cheap, portable, intuitive, and they have undergone an ungodly amount of user testing. Most importantly, recruits will typically already have experience using them.
rawoke083600 7 minutes ago 0 replies      
$38k for the current version! No way!
org3432 42 minutes ago 0 replies      
Well, I'd rather use WASD and a mouse personally.
0xcde4c3db 3 hours ago 0 replies      
While they're not likely to be the best control experience you can possibly get for a given application (as any serious FPS or fighting game player can tell you), the mainstream official console controllers really are engineering and design feats in their own right. This is one of those things where getting it 90% right is easy but the last 10% will make people hate using it.
mkhalil 3 hours ago 2 replies      
I'm also one to agree that this is a great step toward trimming our huge military expenditures, particularly the ones that come out as "leaks" because the military is so embarrassed by them.

I'm sure the U.S. Navy will be taking a look at the hardware and maybe even flashing their own firmware on them, lest we open ourselves up to a catastrophic backdoor.

jedberg 3 hours ago 2 replies      
> I can go to any video game store and procure an Xbox controller anywhere in the world, so it makes a very easy replacement.

Or the Rec room.

bitwize 20 minutes ago 0 replies      
Why not? It's a good controller, cheap, and readily available.

I'm reminded of the in-joke in Metal Gear Solid 4 that saw Solid Snake use a PS3 controller to joystick around the "Metal Gear Mark II" recon drone. Then, later that same year, actual soldiers were operating actual drones with Xbox controllers...

jonathankoren 1 hour ago 1 reply      
I've noticed Xbox-style controllers are very common in the military now. I'm thinking it's a generational thing. It wasn't that long ago that you would have seen a big joystick covered in buttons serving this purpose.
dba7dba 1 hour ago 0 replies      
The South Korean Navy recently retrofitted their Type 209 and other submarines with digital-camera-based periscopes, replacing traditional periscopes.

When the South Korean Navy procurement office requested proposals from traditional periscope manufacturers, the quoted prices were astronomical.

The South Korean Navy said no thanks and had local manufacturers develop, test, and manufacture a new digital-camera-based periscope at a FRACTION (like 10%?) of the price quoted by the traditional manufacturers.

unsigner 56 minutes ago 0 replies      
The Xbox 360 wired controller is a great piece of kit, easily the best controller ever as a balance of price, reliability, and simplicity of use -- the API has literally two functions: "what buttons are pressed" and "set vibration to X". The initial Xbox One controllers were a clear regression, although the recent Xbox One X ones are much better again. However, for use cases such as the submarine in TFA, wireless is a liability, not an improvement.
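The "literally two functions" being described are presumably XInput's XInputGetState and XInputSetState. As a rough illustration, here is a Python ctypes sketch; the struct layouts follow the XInput headers, but the DLL name (xinput1_4.dll) and the helper functions are my own assumptions, and the actual calls only work on Windows:

```python
import ctypes

# Struct layouts from the XInput headers.
class XINPUT_GAMEPAD(ctypes.Structure):
    _fields_ = [
        ("wButtons", ctypes.c_ushort),     # bitmask of digital buttons
        ("bLeftTrigger", ctypes.c_ubyte),
        ("bRightTrigger", ctypes.c_ubyte),
        ("sThumbLX", ctypes.c_short),      # analog sticks: -32768..32767
        ("sThumbLY", ctypes.c_short),
        ("sThumbRX", ctypes.c_short),
        ("sThumbRY", ctypes.c_short),
    ]

class XINPUT_STATE(ctypes.Structure):
    _fields_ = [
        ("dwPacketNumber", ctypes.c_uint32),  # increments when input changes
        ("Gamepad", XINPUT_GAMEPAD),
    ]

class XINPUT_VIBRATION(ctypes.Structure):
    _fields_ = [
        ("wLeftMotorSpeed", ctypes.c_ushort),   # 0..65535
        ("wRightMotorSpeed", ctypes.c_ushort),
    ]

def read_buttons(pad_index=0):
    """'What buttons are pressed': XInputGetState (Windows only)."""
    xinput = ctypes.WinDLL("xinput1_4.dll")
    state = XINPUT_STATE()
    if xinput.XInputGetState(pad_index, ctypes.byref(state)) == 0:  # ERROR_SUCCESS
        return state.Gamepad.wButtons
    return None  # controller not connected

def set_vibration(pad_index=0, left=0, right=0):
    """'Set vibration to X': XInputSetState (Windows only)."""
    xinput = ctypes.WinDLL("xinput1_4.dll")
    vib = XINPUT_VIBRATION(left, right)
    xinput.XInputSetState(pad_index, ctypes.byref(vib))
```

Those two calls really do cover everything the comment mentions: polling wButtons answers "what buttons are pressed", and XInputSetState is "set vibration to X".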
W3C abandons consensus, standardizes DRM, EFF resigns boingboing.net
1981 points by guelo  11 hours ago   704 comments top 66
RcouF1uZ4gsC 10 hours ago 10 replies      
Basically, unless you are writing a browser with decent marketshare, you de facto have no voice in making the standards. The only voices that matter are Mozilla (Firefox), Apple (Safari), Google (Chrome), and Microsoft (Edge/Explorer). Despite what any standard says, web developers are going to go by what the browsers actually do. The only company on that list with any real desire to exclude DRM is Mozilla, and unfortunately, if they do that, users will switch to whichever browser makes watching Netflix easiest.
d--b 1 hour ago 1 reply      
What people don't get is that EME sets a strange precedent in the history of HTML.

Web browsers have always been very hackable. HTTP meant you could always look at the traffic being exchanged. And because there was little point in obscuring anything, web browsers allowed you to look into and modify everything:

- view/modify document source

- view/modify DOM

- debug script

- and so on

This is how CSS was defined too. It was supposed to be a compromise between how the user liked things styled, and how the vendor suggested styling the content.

EME brings something new to the table: locked LOGIC. This is not a bad thing per se, but it takes HTML in a completely different direction from what it used to be.

The main concern is not DRM. The main concern is that this is a step in a direction where web browsers become inscrutable virtual machines running code that cannot be looked into. It's basically a step toward turning HTML into Silverlight. This may happen, for instance, if the gaming industry decides that they need EME for in-browser games.

And most importantly this is done for all the wrong reasons: EME cannot stop anyone from copying the rendered content. And it certainly doesn't prevent anyone from downloading copied content.

So EME is just a stupid thing that technology-dumb media dudes are imposing on web developers for no reason, and it may have far-reaching consequences for the future of HTML... That's what's worth talking about.

favorited 10 hours ago 2 replies      
For the record, the EFF only joined the W3C to fight EME in the first place. They're not resigning in protest, they're leaving the group because they didn't win the single battle they joined for the purpose of fighting.
sboselli 10 hours ago 16 replies      
Shame, shame, shame.

We're losing the internet day by day, if we haven't done so already.

I've seen people and posts here and there calling for attention on these issues, but imho it's all too subtle. We should start using harsher terminology for what's actually happening. This is flat out CORRUPTION, and I'm not seeing anyone express it as such.

It's probably too late already, and unfortunately, this is merely a reflection on what's happening in the world in the larger geo-political context. Corruption everywhere.

DCKing 9 hours ago 6 replies      
Can anybody explain to me what will change because of this decision? DRM has been very much part of the internet since RealPlayer was released in 1995, and it has been part of content delivery ever since. It has never seemed to decrease in popularity; quite the opposite. The browsers that 99.9% of people use have already implemented this standard for years anyway. It's not obvious to me that this decision changes anything (it seems this is the status quo already), but maybe there's something I'm missing.

What's going to change from today to tomorrow because of this decision? Or is the meltdown here just people now realizing that the battle is lost, even though it was lost already a long time ago?

One thing I do understand is that this contentious decision is a serious break from tradition and apparently a dick move (although I'd need to see some additional sources on that). But that doesn't seem to be the main topic of the discussion in these comments anyway.

bad_user 9 hours ago 0 replies      
The EFF is right to resign. There's no reason for the EFF to be part of a supposed standardization group that ignores complaints, especially over a recommendation for a technology that puts freedoms at risk.

And to rub salt in the wound, the W3C is claiming that they couldn't reach consensus on a covenant regarding anti-circumvention regulations, yet they are now making this recommendation without consensus, which seems disingenuous to me.

The W3C clearly is, and always has been, a charade.

And people won't forget that easily, just like we haven't forgotten the days when they were holding the web back. So if they were worried about becoming irrelevant by not adopting DRM, well, they just became irrelevant regardless. Might as well admit that the standards are made by the two or three companies that control the market and stop this circus.

bhhaskin 10 hours ago 4 replies      
We are allowing large corporations to dictate and push the web toward a closed system: a future where there has to be an App for that, and if you don't keep your head down you will be censored and cut off from the rest of the online world.
guelo 10 hours ago 1 reply      
For perspective here is W3C CEO's post about it https://www.w3.org/blog/2017/09/reflections-on-the-eme-debat...

And Tim Berners Lee's original decision https://lists.w3.org/Archives/Public/public-html-media/2017J...

pdimitar 9 hours ago 2 replies      
It's time to abandon the major web standards and start devising our own infrastructure. A better stateless protocol, end-to-end encryption on every connection -- no exceptions! -- and decentralized encrypted vaults of cached assets (where every user donates a few, or a few hundred, gigabytes of disk space to participate, effectively becoming a node in a decentralized CDN).

We've let the corporations run wild for far too long. They've been stupid and slow, and it took them A LONG TIME to catch up. But eventually they did. All the while, all of us did absolutely nothing. Myself included, at least.

^ All of that is idealistic revolutionary talk, I am aware. Had I the free time and reserve capital, however, I'd be dead serious about starting such an effort.

bigfoot 10 hours ago 3 replies      
The saddest part of this story is that Netflix/Amazon/younameit will continue to ignore and block Linux users as the niche market they are -- even if a future Firefox or Chrome version comes with the new standardized DRM everyone asked for. Lose/lose situation.
cromulen 10 hours ago 14 replies      
I'm a bit uneducated when it comes to the cryptography involved, but I'm wondering why people here are so certain DRM can't ever work.

Is it because someone will somehow get a copy and upload it to torrent/streaming sites, which of course won't have DRM, thus only annoying legitimate (e.g. Netflix) users? Or are there other concerns?

sigi45 9 hours ago 3 replies      
I think I don't get it.

A company called Microsoft builds software that can render HTML. This company makes money, and it talks to another company called Google, which also makes money by selling movies and stuff and which builds software that can render HTML. And those two companies talk to a third huge company, which makes a shit ton of money and also makes software that can render HTML.

All those _companies_ decided together that they still want some form of DRM and want to standardize it, to make it easier for their consumers.

Now a few people who use the software from those companies thought this code was written for free and without strings attached?

I mean, I do understand the risk, but still: I surf around using my software, written by companies, primarily to surf company sites, not other private pages.

Even Linux and other free software is written primarily by people who get their money from companies, right?

I don't think DRM would be gone if no EME existed.

Paul-ish 9 hours ago 1 reply      
My fear is that DRM for video content will quickly become DRM for text content. Say goodbye to adblocking and tracker blocking.
phkahler 10 hours ago 3 replies      
Now is the time to make it possible for individuals to use DRM when they publish videos online. How often do media companies show people's stuff on TV and such without permission? Of course YouTube's ToS allow that, but this should all be changed ASAP. When large amounts of content actually come from individuals, it's the individuals' rights that need to be protected. When will we see DRM for the masses?
AnthonyMouse 6 hours ago 0 replies      
Case in point why everyone should do this:


eridius 7 hours ago 3 replies      
> 58.4% of the group voted to go on with publication, and the W3C did so today, an unprecedented move in a body that has always operated on consensus and compromise.

What exactly is the EFF saying should have happened? More than 50% voted to go ahead with it. The majority voted for it. I don't see how the W3C going with the majority vote is a dick move. Consensus and compromise is obviously very important, but when one side is strictly anti-DRM, it's pretty hard to compromise. This just seems like the EFF being bitter that they lost and trying to disparage the W3C.

hellbanner 9 hours ago 2 replies      
"This specification does not define a content protection or Digital Rights Management system. Rather, it defines a common API that may be used to discover, select and interact with such systems as well as with simpler content encryption systems. Implementation of Digital Rights Management is not required for compliance with this specification: only the Clear Key system is required to be implemented as a common baseline."

Does not define DRM .. I am seeing a conflict with the title

alexnewman 8 hours ago 0 replies      
I wish I could talk about how bad widevine is but I can't because NDA. Y'all fucked
justonepost 8 hours ago 0 replies      
Yay digital serfdom! Long live the feudal lords! http://www.newsweek.com/silicon-valley-private-property-and-...
projectileboy 7 hours ago 0 replies      
It's utterly heartbreaking to live in an age where you watch the best things come to life, and then die.
kibwen 9 hours ago 1 reply      
Remember the corporate sponsors of the original EME proposal in 2012: Google, Microsoft, and Netflix https://www.w3.org/TR/encrypted-media/
Endy 10 hours ago 0 replies      
Looks like I'm sticking with the EFF only from now on.
wnevets 9 hours ago 0 replies      
The web has made it this far without DRM being part of the standard, why exactly do we need it now? The death of flash?
dottrap 9 hours ago 1 reply      
Did Apple actually support this proposal? Seems like their own self-interest would want them to reject this. The lack of a standardized web DRM would push developers to native apps which benefits Apple. And web DRM doesn't benefit the iTunes eco-system.
makecheck 3 hours ago 0 replies      
Battery life and data minimization should take precedence over extras. It is time to standardize on precisely the bare minimum necessary to render content, and that certainly excludes DRM and auto-play video ads and other endless cruft.

Simply put, I don't want my battery burning through unnecessary restrictions-addition software (both downloading and running).

MrMid 10 hours ago 4 replies      
Doesn't Firefox block DRM content by default? If it continues to do so, and if Chrome does too, then this shouldn't have much effect. If most people's browsers block it, apps shouldn't use it.
dejawu 5 hours ago 0 replies      
> In their public statements about the standard, the W3C executive repeatedly said that they didn't think the DRM advocates would be willing to compromise, and in the absence of such willingness, the exec have given them everything they demanded.

This sentence in particular fills me with rage. These are people and groups who have refused to innovate in the face of the web and have used their clout and momentum to ensure that they never have to again.

So much for the democratization of media the web was supposed to bring. Money still speaks louder than anything else.

5_minutes 10 hours ago 3 replies      
Who exactly is this director who forced this through one-sidedly?

I guess he got something out of this. What a disgrace.

twiddo 10 hours ago 0 replies      
I'm afraid of this becoming the status quo. Everything is going to be a binary blob that you either download and run or you don't. It's really shortsighted to say "if Hollywood doesn't get DRM, you won't get Netflix". The market is there; it just wouldn't have been as easy for Hollywood to serve it.

Now we have made it easy (and even standardized it!).

Fej 10 hours ago 3 replies      
On one hand, this is terrible for freedom in our software; on the other, this isn't the "death of the open Web" that some are proclaiming.

The media groups want DRM and they will get it. This doesn't mean that we are going to lose all freedom on the Web. It's a step in that direction, certainly, but we're sure as heck not there yet.

quickthrower2 8 hours ago 0 replies      
This is the next century

Where the universal's free

You can find it anywhere

Yes, the future has been sold

(Blur 1995)

eh78ssxv2f 9 hours ago 1 reply      
Can somebody explain what exactly is happening here? What were the pros/cons of the move? e.g., it is possible that browsers are in a tight spot: If they fail to provide certain functionality, then content providers just move to native apps. Was that the tradeoff here?
l3m0ndr0p 6 hours ago 0 replies      
Maybe now's the time to abandon the W3C. Maybe we can encourage the EFF to create a "Free Web Consortium", sort of like Let's Encrypt. I think this would better serve a free and open web for the 21st century and beyond. Based on this information from the EFF and their exit from the W3C, it appears the W3C has become corrupted at some level.
thuuuomas 5 hours ago 0 replies      
Netflix is money, but the real DRM atrocities will surface in the ed-tech long tail.
donatj 11 hours ago 5 replies      
So this is how the open web dies.

Can we fork the W3C into a non-corporate puppet now?

ngcazz 9 hours ago 0 replies      
The attack on consumer rights continues...
jmull 7 hours ago 0 replies      
Well, the W3C only matters to the extent they have credibility.

Not that one thing will break that. But there will be future efforts where a positive outcome hinges on the credibility of the W3C's process. Those may not go as well.

andrewflnr 8 hours ago 0 replies      
What actual power does the W3C have to harm (or protect, for that matter) security researchers?
Crontab 10 hours ago 3 replies      
Tim Berners-Lee must be rolling in his grave.
discordance 10 hours ago 1 reply      
Why is this DRM standard a bad thing?
otakucode 7 hours ago 0 replies      
Time to start cracking.
ultim8k 9 hours ago 0 replies      
Instead of creating the right tools for making web apps a lot more native, feature-rich, and consistent, they just do favors for a couple of media companies like Spotify, Netflix, and Google. I hope the W3C dissolves after this.
sev 10 hours ago 3 replies      
DRM can't work in theory, but it is working in practice.

If someone writes a piece of software that allows downloading of DRM-ed content (without losing quality, and playable anytime) from the big names (Netflix, Amazon, etc), then this battle would be won.

sigzero 9 hours ago 0 replies      
"Since when did the W3C abandon reason for madness?" -- Gandalf
HashThis 5 hours ago 0 replies      
Our democracy has a problem. Crony capitalists will reject the democratic process in order to sell out to corporations and give them the power to monetize citizens. They don't care about protecting citizens.

DRM in standards == force == freedom removed

Chardok 10 hours ago 1 reply      
Can anyone explain what are the possible implications of this?

I am imagining a bunch of annoying add-ons to access news articles and what not, but is there a potential to carry over to smaller or niche spaces?

olivermarks 10 hours ago 2 replies      
where does this leave Brave and Opera, two browsers I use regularly?
Osmium 9 hours ago 2 replies      
I am so mixed about this. In principle, this is a terrible idea, and I share many of the concerns in this thread -- I am not a fan of DRM. But as a consumer/end-user, I'd much prefer a standard DRM over Flash/Silverlight any day of the week.

The real question is how we get rid of DRM in the long term. Piracy isn't going away. Hopefully content owners will one day realize the economic cost of implementing DRM isn't worth its return, and only serves to alienate paying customers. I imagine it might take some years for them to realize this however.

AndyKelley 9 hours ago 2 replies      
I predict that companies will start using EME to deploy all their app code to keep their front-end more closed source.
oconnor663 10 hours ago 3 replies      
Whenever compromise fails, both sides blame the other for refusing to give any ground. Obviously browser vendors have a lot more power than the EFF does, and don't necessarily need to compromise as much to get what they want. But I'm curious, for their part, did the EFF actually offer any compromises in defense of consensus?

Edit: You guys are totally right, I missed it in the original article. Shame on me.

Manozco 5 hours ago 0 replies      
What will we answer in twenty years when our children ask: "Daddy, where were you when they made the web such a shitty place?"
rurban 56 minutes ago 0 replies      
There's only one solution: the director needs to resign immediately.
mabynogy 9 hours ago 0 replies      
Someone should fork w3c.
idbehold 10 hours ago 0 replies      
Ideally all content publishers start to really depend on this "feature" and then one or two of the major browser vendors a few years down the line suddenly stop enforcing any restriction the DRM had. Now the publishers have to spend a bunch of money to move back to the plugin style DRM.
acidtrucks 9 hours ago 0 replies      
Maybe this is terrible. Maybe this is the beginning of something totally new. There is nothing about the WWW that prevents us from using totally different technologies, other than it being really pretty good.
shmerl 9 hours ago 1 reply      
W3C is morally dead. Quite a sad development.
spdustin 9 hours ago 2 replies      
Is my math off, or is 58.4% a majority?
unlmtd1 8 hours ago 1 reply      
Good! Let them do that, and let us keep working on things like ipfs, blockchain naming systems, matrix and host identity protocols. The more they try to corrupt the web, the more energy goes into fixing the broken architectures. Then one day, nobody will use the broken DRM net. Politics is a programmer's most wasteful use of his time. Code them out of business.
sysdyne 8 hours ago 0 replies      
https://www.youtube.com/watch?v=h94ZKGVg-B8

W3C doesn't care about freedom. It's good to know the real face of the W3C.
hbk 9 hours ago 0 replies      
I wish we could fork W3C.
lewisinc 10 hours ago 2 replies      
What benefit is there in the EFF resigning? I'm not as educated on the issue as those on the committee, but it feels like not having the EFF on the committee at all is going to do more harm than good.
ianamartin 6 hours ago 0 replies      
I sort of hate the way things are turning out and also am not surprised. When I was 10 years old in the late 80s, everything was open to you if you wanted it.

My next door neighbor has a 10 year old son who wants to learn programming. I gave him an older laptop of mine and offered to do some coaching with him about learning to program on the condition that he always has to do his homework first before we do any programming work. And if he hasn't got his homework done or is having problems with it, I'll help with the homework first.

I had a pretty cool person in my life that did that for me when I was a kid. So I want to pay it back.

But when I think about things, man . . . it was wild as a kid. You could do anything on the internet in the 80s and 90s. It was the wild west.

Nowadays, I'm in the back yard teaching this next-door neighbor's kid, and I'm like, "Yeah, maybe don't do that. That could get you in trouble."

When I was a kid, it was always, "Do it! Can't hurt that much!"

It's different now, I think. People are less free to explore for its own sake.

I could be wrong, but I think there was a golden moment of freedom on the internet that is past. And I'm glad I got to live in that.

swayvil 10 hours ago 1 reply      
The wealthy win again!
revelation 10 hours ago 0 replies      
What's the point? If this is another one of those "industry groups", they could at least have the decency to make it a great corporate junket, with a meeting in Las Vegas or something.
nickysielicki 10 hours ago 0 replies      
So let me get this straight...

My graphics processor supports this encryption. My monitor supports this encryption. My kernel supports this encryption. And we're going to draw the line at EME-- the glue that sits between my web browser and all of this infrastructure? That's the line that we just can't afford to cross?

It's not the fault of the consumer for purchasing hardware that supports this stuff? It's not the fault of the OS developers for supporting it? It's squarely on the W3C and browser vendors for making it accessible?

Seems to me like the EFF is going full Stallman for no actual purpose, and to the detriment of their reputation and role in future W3C discussions.

Ultrafast single TCP packet audio/visual experience github.com
146 points by willlll  6 hours ago   34 comments top 17
richdougherty 5 hours ago 3 replies      
For those of you wondering, this is a webserver that serves an HTML page in a single TCP packet -- I guess 1500 bytes, to avoid fragmentation; it shows as 1.1KB in Chrome's network view. The HTML contains embedded JavaScript that runs a simple demo with animated ASCII and a LOUD changing audio tone.

You can visit the demonstration website here: http://packet.city/

You can see a screenshot of the demo here: http://www.p01.org/128b_raytraced_checkboard/
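To make the budget concrete: on a 1500-byte MTU, subtracting 20 bytes each for the IPv4 and TCP headers leaves about 1460 bytes for the entire HTTP response, headers included. A rough sketch of checking a response against that budget (the page content and header set here are placeholders, not the actual demo's):

```python
import gzip

# Hypothetical page; the real demo packs ASCII animation + audio JS in here.
html = b"<!doctype html><title>1 packet</title><script>/* demo */</script>"
body = gzip.compress(html, 9)

headers = (b"HTTP/1.1 200 OK\r\n"
           b"Content-Encoding: gzip\r\n"
           b"Content-Length: " + str(len(body)).encode() + b"\r\n"
           b"Connection: close\r\n\r\n")

response = headers + body

# 1500-byte MTU minus 20-byte IPv4 header minus 20-byte TCP header.
MSS = 1460
print(len(response), len(response) <= MSS)
```

TCP options (timestamps etc.) eat a little more of the segment in practice, so the real ceiling is slightly below 1460 bytes.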

zengid 3 hours ago 2 replies      
Pro-tip: if you are doing additive synthesis and want to stack sine waves in a harmonic series, please scale the nth harmonic to an amplitude of 1/n. It creates a much more tolerable experience.
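To illustrate why the 1/n roll-off helps: with equal-amplitude partials, the summed waveform's peak grows roughly linearly with the number of harmonics, while 1/n scaling (a sawtooth-like spectrum) stays near the Gibbs bound of about 1.85. A small sketch (the partial count and sample resolution are arbitrary choices):

```python
import math

def peak(num_harmonics, scale):
    """Peak absolute amplitude of a summed harmonic series over one cycle."""
    n_samples = 4096
    best = 0.0
    for i in range(n_samples):
        t = i / n_samples  # one period of the fundamental
        s = sum(scale(n) * math.sin(2 * math.pi * n * t)
                for n in range(1, num_harmonics + 1))
        best = max(best, abs(s))
    return best

flat = peak(16, lambda n: 1.0)        # every partial at full amplitude
rolled = peak(16, lambda n: 1.0 / n)  # 1/n roll-off, as suggested
print(round(flat, 2), round(rolled, 2))
```

With 16 partials the flat stack peaks several times higher than the 1/n version, which is exactly the "much more tolerable" difference in loudness and clipping headroom.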
TCM 4 hours ago 1 reply      
Hi, this website is really loud; if you are using headphones I would mute them or turn them down a bunch. It also autoplays the sound.
davidmurdoch 4 hours ago 0 replies      
Interesting to see DEFLATE over GZIP here, as it is something I used to recommend maybe 7 or so years ago. I collected results for several years and began seeing more browsers dropping support for raw deflate (or switching to zlib deflate).

I accidentally let my hosting account expire years ago and lost the server code and DB for it, but managed to pull the HTML from the Wayback Machine. I've backed up the results here: https://davidmurdoch.com/compression-tests-results/. It's all pretty outdated now, but rather fun to look at.

Anyone know if this is using raw DEFLATE or ZLIB (HTTP 1.1 DEFLATE)?
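To the question of raw DEFLATE vs. zlib framing: the three variants carry the same compressed stream and differ only in framing, so they are distinguishable by their leading bytes; zlib's wbits parameter selects among them. A quick way to see the difference (my own illustration, not code from the article):

```python
import zlib

data = b"<html>hello hello hello</html>"

def compress(payload, wbits):
    c = zlib.compressobj(9, zlib.DEFLATED, wbits)
    return c.compress(payload) + c.flush()

raw = compress(data, -15)  # raw DEFLATE: no header, no checksum
zl = compress(data, 15)    # zlib framing: 2-byte header + Adler-32 trailer
gz = compress(data, 31)    # gzip framing: 10-byte header + CRC-32 trailer

print(zl[:1].hex(), gz[:2].hex())  # zlib streams start 0x78, gzip 0x1f 0x8b
```

So sniffing the first bytes of the response body answers the question: 0x78 means zlib ("HTTP/1.1 deflate"), 0x1f 0x8b means gzip, and anything else is likely raw DEFLATE.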

robocat 5 hours ago 0 replies      

Summary: a demoscene page that is smaller than a single IP frame and uses some flags to avoid other round trips.

The page itself and view-source only work on some browsers. Use Wireshark etc. to see what it actually does.

kozak 27 minutes ago 0 replies      
Looks like the browser spends most of the loading time resolving DNS.
nqzero 5 hours ago 1 reply      
>the greatest website to ever fit in a single TCP packet

a blank page would be an improvement -- that was seizure-inducing

ricardobeat 5 hours ago 1 reply      
What is it? Nothing happens on mobile Safari.
davrosthedalek 5 hours ago 1 reply      
Does the high pitched sound make my Amazon Echo order something?
bpicolo 5 hours ago 1 reply      
This will upset any cat in the vicinity
wmf 5 hours ago 0 replies      
Also ultraloud. Turn down the volume before clicking.
mkj 4 hours ago 1 reply      
Nice work. I guess it'd be just as quick if it sent up to ~4kB -- isn't that the normal TCP initial window? https://tools.ietf.org/html/rfc3390
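For reference, RFC 3390 sets the initial congestion window to min(4*MSS, max(2*MSS, 4380 bytes)), which for the common 1460-byte MSS works out to 4380 bytes, i.e. three full segments in the first flight; note that many modern stacks instead use the larger IW10 from RFC 6928:

```python
def rfc3390_initial_window(mss):
    """Initial congestion window in bytes, per RFC 3390."""
    return min(4 * mss, max(2 * mss, 4380))

for mss in (536, 1460, 4380):
    iw = rfc3390_initial_window(mss)
    print(mss, iw, iw // mss)  # MSS, window in bytes, full segments sent at once
```

So the "~4kB" intuition is about right under RFC 3390, and an IW10 stack could push roughly 14kB before the first ACK.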
remar 5 hours ago 0 replies      
Been years since I went through Beej's socket programming tutorial but I still recognize his code ;)
drvdevd 5 hours ago 0 replies      
Nice. I was just reviewing some TCP fundamentals with my friend for his class, specifically sliding windows and window control, etc.

It's as if they took what we were just reading in a compsci textbook for class and directly monetized it.

kitotik 2 hours ago 0 replies      
Desktop Safari: Can't find variable: AudioContext

No room for vendor specific prefixes :|

rosstex 5 hours ago 0 replies      
Is there a breakdown of the expanded HTML/JS?
quickthrower2 3 hours ago 0 replies      
Any direct link? Do I need to build this mofo?
Gas Pump Skimmers sparkfun.com
98 points by whalesalad  4 hours ago   68 comments top 9
watertom 28 minutes ago 3 replies      
Chips aren't any more secure; I can read the data off my card with a standard chip reader, and I have. The same data that's on the chip is on the mag stripe.

Just because you are using the chip doesn't mean you are doing an EMV transaction. The unique transaction codes only happen with an EMV transaction; almost every time you dip your card, it's a regular old card transaction, just as if you had swiped it. Why?

Getting EMV certified requires every part of the transaction chain to go through unified system testing, for each combination of hardware, software, card type, processor, and issuing bank.

I've been eyeballs-deep in this nonsense for the last year or so. We just can't justify the expense of getting EMV certified, so we just accept the chip and do a regular transaction.

As a consumer you have no way of knowing whether your transaction is an EMV transaction or just a chip-enabled regular transaction.

kbart 1 hour ago 4 replies      
I still don't get why all credit cards have a magstripe when we've had chip-and-PIN for decades. Yes, I know it's required by a standard, but I see no justification for that. I have never used the magstripe, but it's right there on every one of my cards and there's no way to disable it (I've asked my bank), unless you deliberately destroy it (by scratching or similar).
anonymousjunior 1 hour ago 0 replies      
In the second image here [1] there's a security seal on the payment enclosure as a whole; I'd imagine a simple security seal along the side of the card scanner intake would thwart most would-be card skimmers, no?

At that point the employees could just make checking the seals part of the standard inspection, and it'd be more obvious to customers if one were missing.

[1]: https://cdn.sparkfun.com/assets/learn_tutorials/6/9/4/Gas_Pu...

aidenn0 2 hours ago 1 reply      
It seems like one could make a Bluetooth snooper that looks for people who connect to the skimmer. Then you could catch the skimmer's users when they come by to download the data.
new299 2 hours ago 4 replies      
Nice writeup. But in the article they write:

> Are you angry that your card has been stolen, again? Contact your local congress person or senator and ask them to pass legislation that fines gas stations $100 for every card that is discovered on a skimmer in one of their pumps. It's ultimately up to the gas stations and pump manufacturers to secure their pumps.

Suggesting a solution like it's an easy fix always bugs me a bit. Would a 100USD fine actually work here? The issue seems more with the fact that the US hasn't upgraded to a chip&pin style system. You might end up just costing the gas stations more money, when they don't actually have the power to do much about the problem.

It feels a bit like victim blaming, when in this case the victim has little choice but to work with the system as they find it.

Brushfire 2 hours ago 6 replies      
I'm curious if they are also logging the zip code entered via keypad. I can't remember the last time I used a pump without zip code validation.
stevenh 1 hour ago 1 reply      
I was skimmed at a gas station last winter. I pay with cash now.
jaredandrews 2 hours ago 2 replies      
Very interesting, just installed the app they created. Going to be driving all over the northeast for the next few months. I wonder if I will find any skimmers...
Pfhreak 2 hours ago 4 replies      
> The Skimmer Scanner is a free, open source app that detects common bluetooth based credit card skimmers predominantly found in gas pumps. The app scans for available bluetooth connections looking for a device with the title HC-05. If found, the app will attempt to connect using the default password of 1234. Once connected, the letter 'P' will be sent. If a response of 'M' is received, then there is a very high likelihood there is a skimmer in the bluetooth range of your phone (5 to 15 feet).

Why isn't this just a part of the gas pump itself? (Or the payment station or whatever.) Is there a market for someone to make skimmer detector addons for gas stations? (If not, why not?)
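The probe-and-response handshake quoted above is simple enough to sketch in code. Below is a rough, illustrative Python version of the heuristic; the function name and the `transact` callable are my own inventions, standing in for a real RFCOMM socket (which in practice you'd open after pairing with the HC-05 at the OS level, e.g. via PyBluez), so this shows only the decision logic, not actual radio I/O.

```python
def probe_for_skimmer(device_name, transact):
    """Apply the Skimmer Scanner heuristic described in the quote.

    device_name: advertised Bluetooth name of the device.
    transact: a callable that sends one character to the device and
              returns the decoded response; it abstracts the actual
              RFCOMM socket purely for illustration.
    Returns True when the device looks like a common HC-05-based
    skimmer: it advertises the module's default name and answers
    the probe character 'P' with 'M'.
    """
    if device_name != "HC-05":
        return False          # not the off-the-shelf module skimmers use
    try:
        response = transact("P")
    except OSError:
        return False          # couldn't connect (e.g. default PIN changed)
    return response == "M"


# Illustration with fake transports (no radio needed):
skimmer_like = probe_for_skimmer("HC-05", lambda probe: "M")
innocent = probe_for_skimmer("HC-05", lambda probe: "OK")
renamed = probe_for_skimmer("JBL Speaker", lambda probe: "M")
```

The three fake-transport calls cover the cases the app distinguishes: default name plus the telltale response, default name with some other response, and a device that has been renamed (which this heuristic would miss).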

Fresh IDE flatassembler.net
77 points by mutin-sa  3 hours ago   19 comments top 4
truncate 1 hour ago 2 replies      
Just curious if they wrote the IDE in assembly (my instincts say not, but they do make assembler). The login is giving 404 https://fresh.flatassembler.net/fossil/repo/fresh/fossil/rep...
chrisparton1991 2 hours ago 3 replies      
The HTTP response code for fresh.flatassembler.net assets is "200 She'll be apples", I didn't know 200s could be customised like that :)
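They can: the reason phrase on an HTTP/1.1 status line is free-form text, and standard clients pass it through verbatim. A minimal Python sketch (standard library only; the one-shot socket server is a throwaway for demonstration, not how you'd serve real traffic):

```python
import socket
import threading
from http.client import HTTPConnection

# A toy server that answers a single request with a custom reason phrase.
server = socket.socket()
server.bind(("127.0.0.1", 0))       # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve_once():
    conn, _ = server.accept()
    conn.recv(4096)                  # read (and ignore) the request
    conn.sendall(
        b"HTTP/1.1 200 She'll be apples\r\n"
        b"Content-Length: 0\r\n"
        b"Connection: close\r\n\r\n"
    )
    conn.close()

threading.Thread(target=serve_once, daemon=True).start()

client = HTTPConnection("127.0.0.1", port)
client.request("GET", "/")
resp = client.getresponse()
status, reason = resp.status, resp.reason   # the client exposes the phrase as-is
client.close()
server.close()
```

`http.client` splits the status line into version, code, and everything after, so `resp.reason` comes back as `"She'll be apples"` while `resp.status` is still plain 200.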
dguaraglia 2 hours ago 1 reply      
Oh boy, the memories from all the "IDEs" there were for NASM. In the end, it was just easier to use your favorite editor, because most of them just had a pretty color scheme for assembly. This looks pretty sweet, though.
TylerE 2 hours ago 2 replies      
It would make a better first impression if your hero screenshot didn't have font rendering from circa 1995.
Top medical experts say we should decriminalize all drugs washingtonpost.com
133 points by anythingnonidin  4 hours ago   71 comments top 13
fsloth 2 hours ago 4 replies      
Finally. I hope this will signal a turning point on this insane approach to national health in all countries. Drugs are a health problem, not a criminal problem.

Lone experts have been fired in Western countries when they expressed these common-sense sentiments on their own [0]. I hope this group fares better.

Nixon started the war on drugs as a means to attack left-wing and black activists [1]. Any previous legislation for control was likewise instigated in the service of race and class warfare. The fact that some drugs have been the staple narcotic of certain groups has been used as a control mechanism over those groups by criminalizing the substance.

Substance distribution should be controlled by law. But that is the status quo anyway: governments already control the distribution of any number of dangerous substances.

[0] https://www.theguardian.com/politics/2009/oct/30/drugs-advis...

[1] https://qz.com/645990/nixon-advisor-we-created-the-war-on-dr...

anythingnonidin 1 hour ago 2 replies      
Some reasons for decriminalization of use and possession, summarized:

(i.e., removing criminal penalties so that jail time is no longer possible, but not government-supplied drugs)

- The war on drugs has failed. Criminalization has consequences for the community; in many cases, most of the negative effects come from the criminalization, not the drug.

- Consensual crimes that don't harm others shouldn't be crimes.

- Drug prohibition doesn't seem to decrease use; see Portugal. (Also perhaps the Czech Republic, Italy, and Spain?)

- Drug war enforcement costs a ton; we could save tax money.

- 1.2 million people arrested for drug possession in 2015. http://www.drugwarfacts.org/chapter/crime_arrests#arrests

- Prisons are crowded. Would reduce this a little.

- Easier for addicts to seek treatment.

I think the strongest reasons are that drug use should be treated as a health issue and not a criminal issue, and that consensual crimes that don't harm others shouldn't be crimes.

What is the single strongest reason for or against decriminalizing all drugs, in your opinion?

ars 2 hours ago 3 replies      
And not just recreational drugs. All drugs.

We trust people to manage their own life, this includes medicine. Maybe insurance won't pay for it without a Dr, but if someone wants to self pay, that's their choice.

Synaesthesia 2 hours ago 2 replies      
As Chomsky said, the way to deal with drug abuse is through education and treatment. That's how we successfully reduced smoking, drinking coffee and other unhealthy habits in the USA. Not by throwing people in jail.
marze 34 minutes ago 1 reply      
Bottom line is deaths from overdose.

Drugs criminalized, USA:

150 deaths/million people/year

Drugs legalized, Portugal:

3 deaths/million people/year

SilentCrossing 23 minutes ago 1 reply      
The article is about "decriminalization of all nonviolent drug use and possession". This sounds noble, but as we found out in the Netherlands, you will have to make a distinction between soft drugs and hard drugs. Hard drugs are always associated with crime, most of the time violent crime and gangs.

So I would say: been there, done that. We are cracking down on this type of policy, since 'education' does not work, and it seems that prisons do not make the criminal, as some other comments claim... Well, they can always wish for it, but reality does not care either way.

jeffdavis 1 hour ago 2 replies      
"scientific consensus"

I am very troubled by this phrase -- it's almost an oxymoron. Science is not democratic, and scientists aren't anointed arbiters of facts.

Anyone can be a scientist simply by following the scientific method and collecting data in good faith; and a lone outsider with new data can challenge 100 years of "consensus". Obviously they are subject to challenges themselves, or if it's unlikely enough they might reasonably be ignored (e.g. a known charlatan saying they observed cold fusion).

Did this terminology start with the "scientific consensus" on anthropogenic climate change (which I do not dispute, by the way)? I think I understand why it was used in that debate, but I don't think it was a good precedent. Now it's being used to directly apply to policy ("growing scientific consensus on the failures of the global war on drugs").

Before long, it will be used directly in political debates to try to force some scientific organizations to choose a side. And then all credibility is lost.

Researching policies that require deep analysis of scientific (or other) data should be left to think tanks or something similar.

watertom 1 hour ago 0 replies      

Why should we funnel money to illegal drug cartels to fuel their crime and violence by decriminalizing drugs?

leksak 1 hour ago 0 replies      
> decriminalization of all nonviolent drug use and possession

What is meant by nonviolent drug use?

caxistic 2 hours ago 2 replies      
Needs to go even further, and deregulate access to prescription drugs as well, to open up provider competition and increase consumer choice.
pcurve 2 hours ago 1 reply      
I think this could cause a shift of interest in the relevant job sectors.
palad1n 2 hours ago 2 replies      
First interview with CEO of GoCardless after paralyzing accident techcrunch.com
93 points by robbiet480  5 hours ago   23 comments top 4
andy-wu 3 hours ago 0 replies      
This is really incredible. I'm a quadriplegic student and you don't really see people in my position in tech, and definitely not in executive roles so this means a lot in terms of proving it's possible.
stretchwithme 4 hours ago 6 replies      
I think we need to start designing our cities so that people walk and cycle on a level above the street, sort of like the Highline in NYC. And cars and trucks operate below that without pedestrians.

That would make cycling a lot safer. Expensive to do for existing cities, but robotic construction may make it cheap enough to do in a few decades.

dvdhnt 2 hours ago 0 replies      
> Delivered without sugar-coating and designed to manage expectations, the prognosis from doctors wasn't good. The aim of surgery was to straighten Takeuchi's spine and fix his posture, not to enable him to walk again. If during the procedure the doctors saw additional work that could be done, they would do it, but he shouldn't allow himself to think that he would wake up and be walking.

Wow. We are a community of people who enjoy the ability to dream and ask, "what if?" It's very difficult to turn that off and accept that the impossible is, in fact, impossible. That kind of realization can be, and to me almost certainly would be, defeating.

Not for Hiroki - this guy is just wired differently.

I wish him the best.

jimbokun 2 hours ago 0 replies      
Interesting how work can be a meaningful distraction after a crisis. Something meaningful to think about other than adjusting to life in a wheelchair, in this case.
Equifax Suffered a Hack Almost Five Months Earlier Than the Date It Disclosed bloomberg.com
375 points by QUFB  8 hours ago   53 comments top 11
orf 7 hours ago 5 replies      
> One possible explanation, according to several veteran security experts consulted by Bloomberg, is that the investigation didn't uncover evidence that data was accessed. Most data breach disclosure laws kick in only once there's evidence that sensitive personal identifying information like social security numbers and birth dates have been taken.

There was one company (very well known) I know of that was breached, but their logging and general security infrastructure was so poor that they had no direct evidence that customer info was breached, so they didn't have to report the hack. They only found the intrusion due to excessive load the intruders caused on some services.

Customer info was certainly accessed (the attackers were everywhere); it's just that there was no record of it, as the records they kept were so few and far between.

Part of me thinks it's a pretty clever workaround to such laws.

poulsbohemian 6 hours ago 2 replies      
I do performance / app triage work, but see the same thing. Often I walk into a supposed "emergency" only to discover the problem has been occurring for months, if not years. Often there is a significant cost (i.e., in the millions) but either the organization isn't willing to remediate, or isn't even aware of the full scope of the cost ("It's not my budget so I don't care"). In at least one case, I came across a security problem where the response was "oh yeah, we've known about that for years". Sigh. Sadly, too often unless companies have a very large customer who gets angry with them, or they are publicly shamed for a problem, they just let it magically go.
addicted 1 hour ago 0 replies      
Here's a question: what if, based on this hack and others, someone decided to publicly post all the details they are aware of? So basically I could go on a website and look up all the hacked SSNs and the person and information associated with each. How would the US cope with that?

The reason I ask is that it's definitely not gonna happen. But it's arguably a lot better than the situation right now, where we have a few malicious actors who do have that information. If the data were completely public, I feel you'd have a huge effort to fix the problem because your neighbor could look up your creditworthiness. Yet I think the situation right now is worse, because we won't have that effort to fix the problem, yet 95% of the people who would have caused you problems have that data.

rotrux 7 hours ago 2 replies      
IMPORTANT: The date of disclosure is ALSO the date that demand for hacked data explodes.

It's good practice to have a staged-disclosure procedure for leaks of this nature.

For example: your bank should be told to start fine-tuning its anti-fraud capabilities BEFORE the entire world is made aware that you can be defrauded in this particular manner.

cft 8 hours ago 3 replies      
I think it's good news: if your identity has not been used yet, there's less chance that it will be, because the data has already been out for that long.
ams6110 7 hours ago 3 replies      
If they had an outside security firm helping them starting in March, and another breach in July, that doesn't say much for the capabilities and competency of the security firm.
kylehotchkiss 8 hours ago 2 replies      
I'm sure this looks good for the insider trading news.
dogma1138 8 hours ago 0 replies      
The timeframe isn't unusual, unless they also haven't reported it to major stakeholders and regulatory bodies.

That said, this might absolve them of some responsibility if the vulnerability hadn't yet been disclosed to major companies five months ago.

ArchReaper 8 hours ago 1 reply      
I thought this was already known? Is this a new revelation?
icedchai 7 hours ago 0 replies      
Sounds like good news for my short position.
Exuma 8 hours ago 0 replies      
The plot thickens.
Oculus Medium Under the Hood oculus.com
60 points by mynameised  5 hours ago   2 comments top
biocomputation 1 hour ago 1 reply      
Neat, but I don't think this is going to help them sell more hardware.
Introducing Keybase Teams keybase.io
319 points by jashkenas  13 hours ago   93 comments top 21
malgorithms 12 hours ago 10 replies      
Blog author here. In the interest of keeping the post more of a summary, we left the cryptographic details to our docs. Here's a link to that: https://keybase.io/docs/teams/index . We're happy to answer questions here.

This really is an exciting product for us. Once you can define a team, cryptographically, without server trust, a lot of other things follow. We'll be launching some of those things in the coming weeks.

Also left out from the blog post: teams (and chats) can be controlled through a local API, so pretty much everything in Keybase can run in the form of a bot, also without trusting Keybase servers. Cheers to crypto!

segeda 10 minutes ago 0 replies      
Any idea how to make simple onboarding for a public team? Something like https://github.com/rauchg/slackin for Slack. Thanks!
e12e 10 hours ago 2 replies      
> But Keybase teamwork is end-to-end encrypted, which means you don't have to worry about server hacks. Alternatively, you can lie awake at night...fearing a breach of your company's messaging history. What if your team's history got stolen from Slack and leaked or published? The legal and emotional nightmare.

I think it's great that this is an e2e solution, and obviously the path for an attacker from total compromise of Keybase to total compromise of user data is one step removed - it needs to go via pushing a backdoored Keybase client (or, ahem, a secret entity needs to force that to happen).

But either way, in terms of a targeted attack for a company's data - surely the clients (phones, workstations) are the weak point? In other words, I think the paragraph oversells the benefit of e2e a little bit?

This isn't really Keybase's fault - consumer computing is pathetically insecure and hard to secure (with the possible exception of iOS devices). That's just how things are right now.

arosier 7 hours ago 0 replies      
Looks like they registered more than just a "few tech companies." I've gotten the following error message while testing which names they might have blocked: "this name has been reserved by Keybase staff, perhaps for your team. Reach out to chris@keybase.io for more info." Some names tested: Ford, Nike, Safeway (though "wholefoods" was available, while "wholefoodsmarket" was reserved). Tech seemed pretty well covered: Google, Robinhood, and ProtonMail all threw the above message.
xwvvvvwx 10 hours ago 2 replies      
So I think Keybase is an amazing product, and this seems great as well, but I get a bit stressed that everything is free.

I would happily pay for kbfs alone and I would hate to see them go under because they run out of money.

acobster 1 hour ago 0 replies      
Keybase is a truly exceptional product and this feature especially solves a big problem for my company. Thank you for all your work on this.
lettergram 9 hours ago 5 replies      
I like the service keybase provides, but I don't think it has any chance of taking off with the general public.

18 months or so ago I wrote an app called AnyCrypt, utilizing Keybase under the hood: http://lettergram.github.io/AnyCrypt/

The idea was to make it easier for my friends and me to send encrypted messages over any medium: Facebook Messenger, Slack, G chat / Hangouts, etc. Facebook couldn't look into our messages, and everything was pretty easy. It took two clicks, which was a pain, but because we were security conscious we could power through.

On the other hand, most people I work with, my family, other friends, etc. don't have the patience for that. It needs to be zero clicks, or at most one click. The current route Keybase is taking is a command-line interface requiring multiple commands.

The CLI is simply not going to catch on for the normal user. The experienced users just use their own PGP keys and manage it.

As for the new interface, it does look nice. However, the fact that it needs a unique chat name is kind of a pain. Why not just assign a unique identifier to the chat, then rename it locally, similar to the way Signal does it? Also, what happens if you revoke your PGP key?

ScottEvtuch 12 hours ago 2 replies      
I'm curious about the decision to make team names globally unique and unchangeable. It obviously has some trust benefits in that you can't spoof a team if you know the name, but shouldn't the proof of legitimacy be in the membership and signature chain, not the name?

If Keybase popularity grows, then any sufficiently large company will probably have to use "CompanyName-Corp" or something equally vague as their team name would be taken by squatters. A malicious user could invite someone to "CompanyName-Corporate" and most users probably wouldn't even notice.

koolba 11 hours ago 1 reply      
Very cool. Regarding team naming, is there anything preventing one from registering "google" or "google.com" as a team name?
dexterdog 9 hours ago 0 replies      
If I stay in a team chat for a long time, including files and all of that, do all of my devices keep a copy of everything said and, worse, every file posted?
hamandcheese 11 hours ago 0 replies      
I haven't had a chance to try the app yet, but I'm very curious: how will searching work? That's the only real thing I can imagine would prevent this from replacing Slack.
wilg 12 hours ago 2 replies      
I'm getting this error upon trying to create a team:

> can't create a new team without having provisioned a per-user key

According to the docs (https://keybase.io/docs/teams/puk) it should have automatically been created for me, though? Not sure if relevant but I did not use the auto-updater to update Keybase for macOS (if there is one), I just downloaded the latest DMG.

bachmeier 6 hours ago 1 reply      
I've never heard of this company before. How do they make money if this is free?
Walkman 10 hours ago 0 replies      
I didn't see that coming; Keybase wants to be Slack 2 :D
tanderson92 9 hours ago 0 replies      
Interesting, good to see that they've thought this one out a bit (many other unis too):

% keybase team create caltech

ERROR this name has been reserved by Keybase staff, perhaps for your team. Reach out to (email redacted) for more info. (code 2650)

moontear 11 hours ago 0 replies      
"Error this name has been reserved by keybase staff, contact chris@..."

Tried multiple generic names as well as some random companies. Might I ask what you are basing your "blacklist" on?

bruce_one 8 hours ago 0 replies      
Unrelated, but any interest in making the Android app available via a non-Play-Store medium?
fiatjaf 9 hours ago 0 replies      
Where and when do you get paid?
fiatjaf 8 hours ago 0 replies      
I don't know why this is being presented and commented on as a bad thing; globally unique names for teams, just like for users, are a very good thing.
adammenges 8 hours ago 0 replies      
Awesome work, thanks guys!
skybrian 9 hours ago 1 reply      
This looks like the start of a viable social network with very strong security.

At one point I would have thought that's great, but now I find it vaguely alarming. Maybe this is overly cynical, but I wonder what happens if it really starts to take off (say, growing towards Twitter scale). At what point does it become yet another social network that's degrading into a cesspool? And how do you get out of that state when everything is cryptographically locked down?

The lesson of Bitcoin seems to be that cryptographically irreversible actions combined with valuable digital assets (whether coins or, say, celebrity accounts) have a tendency to attract scammers and hackers, so you better be certain that your client machines can't be broken into and that you're immune to social engineering. And who is that certain?

I'll probably try it out, but I think undo buttons are really important. I'd be more comfortable if there were some trusted people who can roll back mistakes and fraud after they've been discovered and proven. (But, then again, how do we know they can be trusted?)

I guess this is all just FUD, but, hey, I'm feeling it.

Gut Germs Appear to Play Role in Multiple Sclerosis scientificamerican.com
189 points by how-about-this  11 hours ago   62 comments top 11
jobu 3 hours ago 0 replies      
This seems like the key thing:

> There was another intriguing connection: Acinetobacter are molecular mimics of proteins found in myelin, the nerve cell coating that the immune system attacks in MS.

It sounds like a very similar situation to PANDAS and PANS: http://www.pandasnetwork.org/understanding-pandaspans/what-i...

Some bacteria have evolved to trick our immune system into ignoring them, but an overactive immune system will attack that bacteria and the human cells they're trying to mimic.

caublestone 6 hours ago 1 reply      
Bacteriophages lost out to antibiotics in the 1950s. Designing bacteriophages for specific microbes is fairly straightforward, so you could design a probiotic loaded with good bacteria, delivered together with a bacteriophage against bad bacteria, to quickly adjust the microbiome ecology toward a desired state. The nice thing is that bacteriophages are dietary supplements, so you don't need drug approval (unless cure claims are made).
Mz 8 hours ago 1 reply      
> the gut immune system has 70-80% of the body's immune cells.


It seems like a pretty obvious connection once you know that detail. If you have any kind of chronic medical condition, you would likely benefit from improving your gut health. (Hint: Dietary changes are a good place to start.)

Havoc 8 hours ago 1 reply      
Research into many of the autoimmune-related diseases seems to be vaguely arriving at a similar conclusion lately.
vvpan 9 hours ago 1 reply      
While some searching might clear it up, the fact that the article doesn't even mention which type of MS they are talking about is a little vexing.
darkerside 7 hours ago 0 replies      
> Acinetobacter are molecular mimics of proteins found in myelin, the nerve cell coating that the immune system attacks in MS. That suggests the bacteria might trigger immune attacks that hit myelin, too, as when soldiers who inadvertently resemble the enemy get hit by friendly fire.

If this is true, could high dose antibiotics eliminate the gut bacteria and thereby halt the immune reaction associated with that type of MS?

emmelaich 5 hours ago 0 replies      
A good read:

 Gut, by Giulia Enders

Gatsky 5 hours ago 1 reply      
It is very easy to find correlations between phenotypic traits (getting MS, developing cancer, autism, etc.) and "your favourite biological variable". See John Ioannidis talking about microRNA studies, for example. Part of the reason for this is that high-throughput data like gene expression measures, gene methylation, or microbiome sequencing gives you p >> n data which is nevertheless effectively low dimensional, i.e. a few factors explain most of the variation. It is therefore easy to find a variable or a 'signature' which correlates with one of these explanatory factors and the phenotype, but doesn't tell you anything much about what is going on, or how to treat the disease. But it does allow thousands of papers to be published with p-values < 0.05.

Not to downplay this data, but caution is required not to over-interpret the results, and to avoid making the same mistakes over and over again.
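The trap described here is easy to demonstrate with pure noise. The sketch below is my own toy simulation (numpy assumed; feature count and threshold chosen arbitrarily): 2000 random "features" across 50 samples with arbitrary case/control labels still yield roughly 5% nominally "significant" hits, despite there being no signal at all.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 50, 2000                  # 50 samples, 2000 noise features
X = rng.normal(size=(n, p))      # fake "expression" data: pure noise
y = np.repeat([0, 1], n // 2)    # arbitrary case/control labels

# Welch two-sample t statistic for every feature at once.
a, b = X[y == 0], X[y == 1]
se = np.sqrt(a.var(axis=0, ddof=1) / len(a) + b.var(axis=0, ddof=1) / len(b))
t = (a.mean(axis=0) - b.mean(axis=0)) / se

# |t| > ~2.01 corresponds to two-sided p < 0.05 at ~48 degrees of freedom.
n_significant = int(np.sum(np.abs(t) > 2.01))
```

On the order of 100 of the 2000 features (about 5%) clear the threshold by chance alone, which is exactly the mechanism that lets "signatures" be mined out of noise when multiple testing isn't accounted for.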

capkutay 5 hours ago 0 replies      
I wonder if there's a similar correlation between the gut and something like ALS.
basicplus2 5 hours ago 0 replies      
Makes me wonder if this could be related to leaky gut due to wheat protein and/or glyphosate.
miguelrochefort 7 hours ago 2 replies      
Do we actually need gut bacteria? Do they serve purposes other than breaking down fibers?
The Joy of Sexagesimal Floating-Point Arithmetic scientificamerican.com
20 points by joubert  3 hours ago   2 comments top
jeffwass 1 hour ago 1 reply      
Below is a slightly modified comment I wrote 59 days ago on the benefits of duodecimal (base-12) over decimal and hexadecimal.

Sexagesimal takes the benefits of Duodecimal further by introducing a nice factor of five into the mix, though at the expense of 48 extra digits.

Base-12 is a nice sweet spot, offering a high number of divisors relative to the number of digits in the base.


I was about to post that in my opinion base-12 is superior to base-10. But someone beat me to it. In a sci-fi novel I'm writing, an advanced alien civilisation uses base-12.

As to your question specifically regarding base-16 instead of base-12, it depends.

Decimal itself is just a bizarre choice, most likely due to humans having literally ten digits. In decimal we can represent exact fractions of 1/2, 1/5, and 1/10 (without repeating decimals like 0.33333 for 1/3). Counting by fives (and twos) is very easy. But choosing prime factors of 2 and 5 is a strange choice in itself. Why skip 3? Why is it more useful to easily represent the fraction 1/5th as 0.2 instead of 1/3rd? How often do we use fifths?

Hexadecimal in one sense is easier, all prime factors are two. So we can represent 1/2, 1/4, 1/8, and 1/16 exactly.

Duodecimal (base-12) is very convenient for having a high proportion of exact fractions. E.g., 1/12, 1/6, 1/4, 1/3, and 1/2 can all be represented exactly. I'd argue in everyday use we're more likely to consider 1/3rd of something than 1/5th.

Counting by twos, threes, fours, and sixes is easy. Watch, let's count to 20 (24 in decimal) by different amounts.

By 2's : 2, 4, 6, 8, A, 10, 12, 14, 16, 18, 1A, 20.

By 3's : 3, 6, 9, 10, 13, 16, 19, 20.

By 4's : 4, 8, 10, 14, 18, 20.

By 6's : 6, 10, 16, 20.

And conversely counting to 1 exactly by different fractions.

By 1/6th : 0.2, 0.4, 0.6, 0.8, 0.A, 1.0

By 1/4th : 0.3, 0.6, 0.9, 1.0

By 1/3rd : 0.4, 0.8, 1.0

By 1/2 : 0.6, 1.0

Base-12 offers four handy subdivisions (excluding 1) instead of two for decimal or three for hexadecimal. That beats hexadecimal using fewer unique digits. It beats decimal by two using only two extra unique digits.

And I think it's these reasons it was chosen for various historical subdivisional units (inches per foot, pence per shilling).

The other item to consider is the relative number of unique values per digit. I'm not sure of the utility of having 10, 12, or 16 here.

At one extreme, while binary is useful for discretising signals in digital logic, using only zeroes and ones becomes cumbersome for daily use at higher numbers. Once we're at base 10 and higher, I'm not sure how much the extra digits help or hurt.
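The fraction examples above follow a general rule: 1/d has a terminating expansion in base b exactly when every prime factor of d also divides b. A small Python sketch checking this for the bases discussed (function and variable names are mine):

```python
from math import gcd

def terminates(d, base):
    """True if 1/d has a terminating expansion in the given base.

    1/d terminates iff every prime factor of d also divides the base,
    so repeatedly strip out gcd(d, base) until nothing is left but 1.
    """
    while (g := gcd(d, base)) > 1:
        while d % g == 0:
            d //= g
    return d == 1

# Which unit fractions 1/2 .. 1/12 terminate in each base?
nice = {b: [d for d in range(2, 13) if terminates(d, b)]
        for b in (10, 12, 16, 60)}
```

For base 12 this recovers the comment's list (1/2, 1/3, 1/4, 1/6, 1/12) plus 1/8 and 1/9, which also terminate there; base 60 additionally picks up the fifths and tenths, missing only 1/7 and 1/11 in this range.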

The Future of HHVM hhvm.com
231 points by mwpmaybe  12 hours ago   110 comments top 22
TazeTSchnitzel 11 hours ago 5 replies      
Given HHVM is already being dropped from PHP packages because of its lagging compatibility, announcing that they're not targeting PHP compatibility any more might be the nail in the coffin for HHVM (and thus Hack) as a viable upgrade from PHP for existing codebases.

I mean, it's great that Hack will work for new Hack code and existing Hack codebases, but there aren't a lot of those. It makes sense for Facebook (why waste your efforts maintaining part of your runtime that you don't need?), but I wonder if this will consign HHVM to irrelevance in the long term. Maybe Hack is a compelling platform for new code, but then, why use this obscure proprietary Facebook thing that's a bit better than PHP when you could use any of the numerous other languages out there that are also better than PHP but have much better ecosystems?

Personally this makes me sad because I wanted to see a standardised, multiple-implementation PHP language. Facebook did, even. They paid someone to write a spec: https://github.com/php/php-langspec

Maybe someone will write a new PHP implementation to take that idea forward. Or maybe we'll be stuck with Zend forever.

The future is strange.

muglug 11 hours ago 2 replies      
HHVM & Hack solved two big problems that made PHP difficult for Facebook and other companies with large existing PHP codebases: speed, and the lack of type checking.

Now the PHP ecosystem is more mature: PHP 7 eliminated the speed differences between HHVM and PHP, and a bunch of static analysis tools find 95% of the bugs that HHVM's typechecker finds.

It makes sense that this would be an inflection point for the future of HHVM.

I hope that more features from HHVM make it into PHP core (especially property types and generics) because, whatever FB decides to do with HHVM, PHP is here for the long haul.

rrdharan 12 hours ago 6 replies      
This is fascinating. It's a well-written post and their plan makes sense to me, but I imagine there's a tough choice ahead for framework authors (the Laravels and Drupals of the world) about whether they want to fork their communities, stay with PHP7, or try to target both with the same codebase (in the near term or long term)?

At any rate at least the fact that the HHVM folks are communicating the strategy effectively and transparently should help everyone involved make reasonable decisions.

philippz 56 minutes ago 0 replies      
Sad. Facebook's involvement, utilizing PHP and pushing the language forward by extending it, was a good sign for the PHP community. I would have loved to see them align with PHP 7 or, going further, put their engineers to work improving PHP itself. PHP has such a huge ecosystem. I wouldn't risk betting on Hack's future.
bepotts 12 hours ago 4 replies      
How is Hack? Has anyone built anything with it and would like to share their thoughts? How's the HHVM community?

I've always thought that PHP was an underrated language that got a bad rep due to whacky design choices and PHP developers being seen as "less skilled" (a stereotype I know, but it is prominent) than others. Object Oriented PHP and frameworks like Laravel were a nice change of pace in my opinion, and there's plenty of good PHP coders out there if they had the right experience and stuck to a good coding guideline.

Alas, I confess I stepped away from PHP due to the stereotypes against it, but HHVM always seemed promising. I haven't heard much about it over the years though.

What's the toolchain for HHVM?

ryangordon 9 hours ago 1 reply      
Here's the interesting thing about all this: HHVM will always be developed, because it's important to Facebook's bottom line, and it will stay open source, because it only benefits them to keep it out there and have other people testing it and improving on it.

Now that they're getting rid of direct PHP support, HHVM is only going to get better. This will unlock a whole host of language improvements that HHVM couldn't otherwise make.

HHVM is faster relative to PHP now, and it will only get faster with these changes. Typing is an important part of making JITed code fast and unless PHP ever decides to fully add it, it will never have the potential to catch up. This is important to PHP-based companies as they grow and want to optimize on cost and development efficiency.

Undoubtedly, this split will be painful initially for those of us who are bought into the symbiosis of the HHVM and PHP ecosystems. How painful the split is will just be a question of which way members of the PHP community want to go (or both). The nice thing is that converting something from PHP to HHVM isn't terribly hard; nowhere near as hard as converting from PHP to Golang. For HHVM, it's mostly just adding type annotations.

sunseb 10 hours ago 2 replies      
I'm excited! :)

PHP is IMHO the most productive and easiest platform for web development:

- a request

- a response

- templating

- no shared state

And that's it! But the language syntax has so many quirks. So it's cool if Hack redesigns the language and makes it more beautiful and consistent. Many developers switch to Ruby or Python because these languages are better designed. I think Hack could attract a lot of these developers who want more beauty in the tools they use.

ankyth27 11 hours ago 1 reply      
Parse, React and now this. Why would I now learn any new Facebook tech?
maxpert 11 hours ago 0 replies      
I am way less sad about HHVM now (especially after the React license debacle). I think Facebook now has the opportunity to treat this fork as a fresh take on PHP and maybe make the language awesome from both a syntax and a performance perspective. I don't think living as a weird hybrid in the current language landscape is an option.
dcgudeman 10 hours ago 3 replies      
I wonder what this means for Wikipedia? Will they be migrating to a Hack-only stack now?
pbiggar 9 hours ago 2 replies      
> Eliminating references. PHP references have unusual semantics, where the function specifies the binding of parameters with no indication at the callsite.

I feel like I called this one in https://circleci.com/blog/critiquing-facebooks-new-php-spec:

> This is interesting because they're changing the definition of the language through a sort of back-channel. They're allowing breaking changes by effectively deciding that other implementation choices are equally valid.

> I'll give you an example, which I'll get into more below. There's a little-known bug in the Zend engine around copying arrays that contain references. IBM wrote a paper about this bug in 2009. Basically, this bug was necessary in Zend to make copying arrays fast, and IBM figured out a way to do it that was actually correct, for only a 10% performance penalty.

the_duke 11 hours ago 1 reply      
Has Hack actually gotten meaningful adoption outside of Facebook?

I never hear about anyone using it...

foxfired 8 hours ago 2 replies      
I think HHVM was to PHP what jQuery was to JavaScript. jQuery forced JavaScript to be better, and the better JavaScript becomes, the less jQuery is needed.

So if we get to a point where HHVM is completely irrelevant, it simply means "Mission Accomplished".

krapp 7 hours ago 1 reply      
Well, this is disappointing. I really like Hack and I was hoping it would take off, but judging from this thread it seems unlikely the language is going anywhere worth following. I guess it's lucky that I only have one project written in it...that I now have to convert back to PHP.

I'm really going to miss XHP. Native XML support has ruined me for templating frameworks. I never want to write HTML as concatenated strings ever again.

royge 4 hours ago 0 replies      
If HHVM (Hack) drops all the inconsistencies and weirdness of PHP in its implementation, so much the better, since it will no longer be compatible with PHP anyway.
ecesena 5 hours ago 0 replies      
What's the best framework for web dev on top of Hack/HHVM? Last time I used PHP I was using Yii (yes-it-is). Wondering what framework someone can use today if she has to start from scratch on Hack.
tiffanyh 12 hours ago 1 reply      
This is pure syntax sugar but will Hack now clean up the PHP inconsistency with function naming and return values?

E.g. FunctionName() vs function_name().


E.g. return 5x20; vs return "5"x10;

crescentfresh 11 hours ago 0 replies      
First I'm hearing of Hack! Didn't realize HHVM had two languages under its wing.
fiatjaf 8 hours ago 2 replies      
[removed language flame war]
ohdrat 11 hours ago 0 replies      
Guess I'll mosey on back to OCaml then...
merb 10 hours ago 0 replies      
I wonder why they don't just deprecate HHVM altogether and maybe create a Hack implementation on top of GraalVM. That would probably be way more performant and might be better for integration into other systems.
memracom 8 hours ago 3 replies      
I think these changes are the death knell for PHP, in any version, for small companies. There is still a place for Hack or PHP7 in very large operations, but startups, and businesses that run at smaller scale, really should walk away from PHP entirely as soon as possible.

Two reasonable directions to choose are Python 3 with a framework like Flask (lightweight) or Django (heavy duty). Or go to the JVM with something like the Grails framework (heavy duty) on the Groovy language. Ratpack is a lightweight framework for Groovy, and there is also an interesting option to use Vaadin 8, which lets you put your GUI code into the main app rather than writing separate JavaScript code.

When making your decision, be sure to consider the huge JVM ecosystem that integrates quite easily with Groovy including development tools like Jenkins and SOAPUI that can be scripted with Groovy. And the Python side also has a fairly extensive ecosystem of libraries as well.

The skill level of Python and Java/Groovy developers tends to be higher than PHP, which has always attracted people who would learn just enough to get by.

The software dev community has gone through an explosion of diversity in the past 2 decades, and that has enabled a lot of experimentation with new ways of doing things. There is a lot of good in this. But now we are in a period of contraction. Some of this is manifested in the spread of functional capabilities via libraries such as reactive extensions, and in functional features being added to languages like Java and JavaScript. Another manifestation is the fading of Perl from prominence, and this is now happening to PHP as well as Ruby.

This is evolution. Embrace it or face your personal extinction as a software developer.

Pitching your early-stage startup stripe.com
213 points by matthewhelm  13 hours ago   22 comments top 7
sebg 13 hours ago 0 replies      
Also worth checking out Patrick's tweet storm following his tweet about this new resource -> https://twitter.com/patio11/status/909800194509758464
ploggingdev 13 hours ago 2 replies      
Since the guide is partly focused on the YC application process, I have one thought (potentially misconception) that I would like others to weigh in on. For context : I'm working on a Disqus alternative with a focus on privacy, so no ads, no tracking scripts ( https://www.indiehackers.com/@ploggingdev/building-my-first-... ). I started working on it a little over two weeks ago and am a few days away from launching. So by the application deadline, I would have only onboarded beta users. Being a single founder who has been working on a product for less than 3 weeks, even if I follow all the advice and craft a well written YC application, I just don't see why YC would consider funding me instead of the numerous other applicants with serious revenue and something that might resemble product-market fit. In other words, I think when talking about crafting a YC application, it's important to discuss that there exists a certain baseline above which such guides really make sense. Sure, I could apply the actionable advice to my application, but will it move the needle at all when I'm a single founder with an MVP? On the other hand the only impressive part about the application might be that I built it in under 3 weeks and onboarded beta users. Thoughts?
lpolovets 11 hours ago 2 replies      
I'm a VC, and this list is great. At a high level, VCs care about three things: team, product/idea, and market. Every VC cares about all of these things, but their prioritizations vary.

Most of Patrick's excellent advice can be lumped into these three buckets. Specifically:

1) You have to establish the credibility of the team: you've done impressive things before; you have a deep understanding of what you're working on now; you can read your audience and know how to communicate effectively; you can get a strong intro (nice-to-have); etc.

2) You have to establish the viability of the market: it's big; it has a real problem; the existing competitors are not doing a good job in a clear way; etc.

3) You have to establish the quality of the idea/product: you have a unique insight or approach relative to competitors; the prototype/early validation is strong; etc.

A lot of the pitches become mediocre when founders are handwavy in one or more of these areas. For example, if the founder spends a lot of time talking about the market and the product idea, but not enough time explaining why the team is uniquely/extremely qualified to succeed. Or the founder has good answers to product/team/market questions, but their answers show they don't know how to read the audience or explain their idea. (Example of not reading the audience: the investor is non-technical and the founder, who is productizing their PhD thesis, spends 90% of the pitch geeking out about technical details.)

Also, I'll add a few tips:

- Don't exaggerate or mislead. An investor will pass if they doubt one of your statements ("silverware is a $150 billion dollar market!") or realize that you're spinning facts (e.g. you say Dropbox is a customer, but later it turns out you meant that one of your free users has an @dropbox.com email). If it turns out that one statement you made is false, then investors will assume there might be more.

- Understanding risks is better than sweeping them under the rug. If your competitor landscape is missing key companies (mentioned in Patrick's post) or you dismiss some $1b+ company as a competitor without any rationale, your audience will become very skeptical. Admitting something is a problem and explaining how you will address it is much more compelling.

- Really know the ins and outs of everything about your company -- at least relative to the audience. If I ask a question or make a product suggestion that the founder hasn't considered, that's a yellow flag. Someone who has been living and breathing their startup for several months should have a much, much deeper knowledge of their domain than an investor who is hearing about it for the first time.

softwareqrafter 10 hours ago 2 replies      
Great writeup, though I have to be completely honest here and say that I love Patrick's writeups for independent hackers, makers, micropreneurs, bootstrappers etc. His writings and practical case studies gave me the power, as a nobody, to make tens of thousands of dollars in order to spend more time with my wife and child, while doing the work I love. I kind of miss those essays.
Kiro 13 hours ago 3 replies      
> Do not cite gross merchandise volume (GMV) as revenue; if you facilitate a transaction between two parties and collect a fee then the total transaction is GMV but only your cut is revenue.

I thought revenue was a "protected" term, like how it's described in the books. In that case isn't GMV the same as revenue? Since that's the money you actually invoice. And your cut is "net revenue", profit or something instead.

ricokatayama 11 hours ago 0 replies      
Great stuff!

It isn't mind-blowing, but it's insightful enough to be worth a look. "Focus on nascent greatness" is a particularly great section, because it tries to clear up some misconceptions about business plans and ideas.

graycat 2 hours ago 0 replies      
The OP is from Stripe, and their Stripe Atlas program seems to want to have a startup pay $500 and, thus, have Stripe get the startup a Delaware C-corporation.

Good grief: Why would a startup, prior to equity funding, want to be a Delaware C-corporation instead of just an LLC?

A student loan collector must halt collections nytimes.com
39 points by twunde  5 hours ago   14 comments top 2
greenyoda 4 hours ago 4 replies      
"Those borrowers had made payments after being sued over loans that were legally uncollectable, either because the statute of limitations had passed or because National Collegiate lacked the documentation needed to collect the debts in court."

This sounds a lot like what was happening during the financial crisis a decade ago. Lenders were selling mortgage loans to Wall Street to be bundled into collateralized debt obligations. After those CDOs got sold a few times, people lost track of who actually owned a particular loan. When borrowers defaulted, it was in many cases impossible to prove who was legally entitled to collect on the debt.

(A line in a spreadsheet that says John Doe owes you $500K is not very convincing evidence to a judge - you need to be able to provide a document with John Doe's signature on it.)

teekert 4 minutes ago 0 replies      
It's mind-boggling how a nation is able to develop a system that forces its young to spend a large part of their life paying for the salaries of bank employees and managers, simply because they want an education. It's even more mind-boggling that these students can then get rid of this debt most easily by joining the system themselves, simply shuffling around money to make more money to pay off their debt.

And then Jamie Dimon calls Bitcoin a Ponzi scheme... Our entire economy is a Ponzi scheme.

All of our money is created as debt; as such, banks collect money on all of the money ever created. They create nothing of value for this. Why do we accept this?

Signal's Moxie Marlinspike calls out Telegram founder Pavel Durov techcrunch.com
80 points by ianopolous  3 hours ago   21 comments top 7
rdtsc 2 hours ago 3 replies      
Last time I looked Telegram wasn't recommended by Moxie and a few other people. That was 2-3 years ago. What's the status now?

They came up with their own encryption protocol and they are not trained or known as cryptographers, that's a warning sign.

> [Durov] The encryption of Signal (=WhatsApp, FB) was funded by the US Government. I predict a backdoor will be found there within 5 years from now...

That's another red flag, needing to spread FUD and lies about Signal. Another reason not to trust Telegram. It indicates that maybe their ethics and integrity are a bit too flexible.

> During our team's 1-week visit to the US last year we had two attempts to bribe our devs by US agencies + pressure on me from the FBI.

Some people might read it as "these guys are so good, FBI is begging to backdoor them". But it can also be read as FBI suspects they are ethically compromised and they have a chance of succeeding.

Fej 2 hours ago 0 replies      
How ironic, the founder of Telegram - Telegram! - calling into question Signal's crypto.

Telegram's crypto is a complete question mark. I wouldn't be surprised if it's backdoored by Russian intelligence.

jabot 36 minutes ago 1 reply      
Originally, I didn't install Signal because I have a Google-free Android smartphone, and Signal depends/depended HARD on the Play framework, even though that's not necessary. [1]

> Marlinspike reiterated that the whole point of end-to-end encryption is that users no longer need to trust anyone if the protocol works and Signal does.

But it depended on the Google Play framework, and I don't trust Google. So where does that leave me?

[1]: Look at the "conversations" app. Yes, it is for XMPP, which is old and uncool, but it (a) doesn't use the Play framework either and (b) uses very little battery on my phone, despite holding an open connection most of the time. That IMO proves that depending on the Play framework is unnecessary in this case.

leksak 1 hour ago 1 reply      
It's a struggle getting non-privacy minded people to change to Signal.
dijit 50 minutes ago 1 reply      
Why is anything against Telegram FUD, but anything against Signal clear sailing?

I'm dubious about Signal, it has the same issues as most other encrypted communication channels (metadata leakage) but surely competition in this space is good.

Being "owned" by Facebook is a large red flag for me, as Facebook has an incentive to gobble up data (not saying the same isn't true for Telegram either). The only messenger I actually trust is iMessage, but that's purely for reasons of "Apple has no incentive at all to snoop".

It's not FUD to criticise Signal.

It's not FUD to question Telegram's crypto.

Holding Moxie and Durov to account for releasing servers that can actually be used would be a great help in being able to independently assess their claims. And even then, I might still err on the side of Durov purely for the fact that after doing what he did (telling the Russian government they couldn't do anything to Telegram) he fled the country, lost most of his fortune, his company. Etc.

pjs_ 1 hour ago 0 replies      
Moxie owns. He went in IMHO.
dingo_bat 2 hours ago 5 replies      
Is it possible to compile Signal from source and run it on Android yet? If not, Moxie needs a bit of "logic" for himself.
Central bank cryptocurrencies bis.org
44 points by prostoalex  7 hours ago   11 comments top 3
trowway21 5 hours ago 1 reply      
Lol. So back crypto $ with trillions of $ of debt going back to the Louisiana Purchase; kept alive by a pyramid scheme dependent on a certain and limitless supply of debt-free immigrants, after they continue to prove they don't require anything concrete to generate demand & subsequently value?

Aka: how interested are people in a cryptocurrency worth market value minus 20 trillion $?

meri_dian 3 hours ago 1 reply      
Very interesting. I'm intrigued by Fedcoin and how it incorporates the possibility of monetary policy into the cryptocurrency framework. The rigidity and hard ceiling on liquidity of Bitcoin is a major weakness of the currency (I know that Bitcoin enthusiasts see it as a strength) but if technology like Fedcoin can overcome that weakness I'm more bullish about cryptocurrency becoming part of the monetary scheme.
Findeton 1 hour ago 0 replies      
Perhaps you should add https://fiatcoin.net to the list.
Currying vs. Partial Application datchley.name
14 points by tosh  4 hours ago   6 comments top 3
tommikaikkonen 2 hours ago 1 reply      
Currying in JavaScript is really nice if you can remember the arities of each curried function you're using. If you forget to call up to the last argument, you'll be passing a curried function instead of the expected value, and it can throw an error really far away from the bug. And instead of normal data to inspect at the error site to point you to the bad call, you'll only have a generic function name to look at. I've spent more hours than I'd like to admit debugging these situations.

A type system would detect these cases, but TypeScript and Flow don't have great support for curried functions. Typing them is very verbose.
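A minimal JavaScript sketch of the arity pitfall described above (the function names are hypothetical, not from the comment):

```javascript
// A curried three-argument function: each call supplies one argument.
const add = a => b => c => a + b + c;

// Forgetting the final call returns a function instead of a number...
const result = add(1)(2);

// ...and nothing throws at the call site; the bad value propagates
// silently, so the symptom (NaN or "x is not a function") surfaces
// far away from the actual bug.
const doubled = result * 2; // NaN, not a TypeError
```

Because no error is raised where the argument was forgotten, all you have to inspect at the eventual failure site is a generic function value, which is exactly the debugging pain the comment describes.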

tengbretson 2 hours ago 1 reply      
I love how the new arrow syntax makes curried functions incredibly simple.

 const fn = a => b => c => { return a + b + c; };
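For illustration (a minimal usage sketch, not from the original comment), each call supplies one argument, and every intermediate result is itself a function:

```javascript
const fn = a => b => c => a + b + c;

fn(1)(2)(3);              // 6
const partial = fn(1)(2); // fixes a and b, still waiting for c
partial(10);              // 13
```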

javajosh 1 hour ago 0 replies      
I wrote my own curry using the Pareto principle; this works for 80% of my use cases (or more):

 const curry = (fn,b) => (a) => fn(a,b);
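For illustration, a usage sketch of the helper above (the `divide` and `halve` names are hypothetical); note that it binds the second argument up front, which is closer to partial application than to classic currying:

```javascript
const curry = (fn, b) => (a) => fn(a, b);

// Fix the second argument of a binary function.
const divide = (a, b) => a / b;
const halve = curry(divide, 2);

halve(10); // 5
halve(7);  // 3.5
```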

Show HN: Doogie A Chromium-Based Browser with Tree-Style Pages cretz.github.io
229 points by kodablah  14 hours ago   67 comments top 20
gmurphy 13 hours ago 3 replies      
This is a great use of Chromium!

Storytime! When we started designing Chrome back in 2006, despite my having spent the past year working on Google's Firefox team, my favorite browser was iRider, which was an IE shell with tree-style tabs. It was too power-user for what we were going for, but some of the concepts (pinning) live on in Chrome today.

mcintyre1994 12 hours ago 1 reply      
Bubbles sound like a really great feature! I'll be waiting on a Mac version too so maybe this already exists - but it'd be awesome to be able to pin domains to bubbles so they always open in that bubble. Then Facebook can have their own bubble and can't see anything even if their scripts/buttons get through ublock.
rprime 12 hours ago 0 replies      
I remember a few years ago when Chrome actually had a hidden feature flag that let you enable side tabs, not fully tree style, but it was better than nothing. Sadly they removed it.
sengork 7 hours ago 0 replies      
Hierarchical web-browsing navigation is something that's been around in NetSurf for a while; more browsers need it:



jdc0589 10 hours ago 1 reply      
I switched to Firefox (dev edition) a month or so ago almost completely because the TreeStyleTabs extension is better than anything for Chrome.

I LOVE having nested tabs. It totally prevents me from getting lost when I'm links deep in technical documentation.

mvdwoord 12 hours ago 0 replies      
This looks very promising, I'll keep an eye out for macos version, and in the mean time might play around in my VMs. In any case, thanks and godspeed to you.

Currently juggling between FF Nightly (for speed and memory usage) but lacking critical add-ons (LastPass), and Chrome(/ium). I tried Vivaldi for a bit but was really disappointed with the UX after a while. I miss the golden era of Opera.

CharlesW 13 hours ago 1 reply      
I love this way of managing tabs and windows! Today I approximate this by using Chrome and the under-appreciated Tabs Outliner[1] side-by-side.

[1] https://chrome.google.com/webstore/detail/tabs-outliner/eggk...

0xCMP 13 hours ago 4 replies      
I wish someone would create an Emacs buffer style browser where we could easily switch between tabs similar to how Emacs lets you switch buffers. I know you can do this in Emacs, but I just think for all the work people do to get tabs working nicely, maybe the way Emacs has done it actually works pretty well without overwhelming the screen with so much information all the time.

That tree on the left would be better in some kind of fuzzy search screen that appears on a key-binding (e.g. like some setups in Emacs) and is otherwise hidden so the full screen can be used for the page. Either as a panel like it is now or a modal/window.

jpfed 13 hours ago 1 reply      
I love everything about this except that it's not done right now. This is excellent.
gwenzek 10 hours ago 0 replies      
It's funny how the feature description is exactly my Firefox setup: Tree Style Tab, mouse gestures to close and switch tabs, the ability to suspend tabs.
moe 9 hours ago 1 reply      
Great work!

I never understood why this isn't the default.

Hopefully this will finally gain traction in mainline Chrome. But I'm not holding my breath...

heywire 12 hours ago 1 reply      
Completely unrelated, but I just had to comment that I love the style of this webpage. Those double-line borders take me back to my days of writing toy programs in Clipper.
skdjksjdksjdk 7 hours ago 0 replies      
Does anyone here make money through these browsers? Either by selling ads or by some other means. I have a few ideas in this space, but I am not sure about monetization strategies.
cjbillington 13 hours ago 1 reply      
Cool idea. Can't run on the latest Ubuntu though, it seems the latest Qt in the Ubuntu repos is 5.7?

  $ ./doogie
  ./doogie: /usr/lib/x86_64-linux-gnu/libQt5Core.so.5: version `Qt_5.9' not found (required by ./doogie)
Or is it supposed to use the Qt shared objects in the current directory? It seems to find the system ones first.

billconan 12 hours ago 1 reply      
So can this be implemented as a Chrome extension, instead of a stand-alone browser?
Meph504 4 hours ago 0 replies      
Can't think of Doogie without thinking of Lil' Doogie


mateuszf 13 hours ago 3 replies      
Looks great! I'll wait for the OSX version. Also, vim or emacs shortcuts would be appreciated.
Cacti 11 hours ago 0 replies      
No OSX support? :( :(
bradhe 13 hours ago 4 replies      
What problem is this solving exactly?
agumonkey 12 hours ago 0 replies      
Oh, the website made me think it was an ASCII UI... too bad. Still great; the page tree is useful right away.
Implement Tcl in Tcl tcl.tk
40 points by blacksqr  8 hours ago   9 comments top 5
fithisux 1 hour ago 0 replies      
Personally, I would like to see a Lisp replicating Tcl functionality; in other words, an alternative surface syntax that could reuse existing pure-Tcl libraries, or at least be able to transpile them to this Lisp.
coliveira 2 hours ago 0 replies      
For anyone who learned Prolog, implementing Prolog in Prolog is a classic exercise in the language, which can be done in a dozen lines. The goal is to be able to modify the language to handle additional, problem-specific syntax (think of a macro system that the language can implement in its own syntax).
asperous 3 hours ago 0 replies      
If anyone is interested in this and hasn't heard about it already, there's also Python implemented in Python:


stevefan1999 5 hours ago 1 reply      
A Tcl in Tcl. A Tcl-ception.

Is this also a kind of compiler bootstrapping?

auntienomen 4 hours ago 1 reply      
Why stop there? Implement Tcl in Tcl in Tcl.
Toys R Us Plans Bankruptcy Filing Amid Debt Struggle bloomberg.com
217 points by rayuela  10 hours ago   212 comments top 26
product50 9 hours ago 11 replies      
Though this may have something to do with Amazon (and retail generally trending online), it does appear that the big culprit here was Bain Capital (and its partners), who took it private in 2005 via leveraged buyout and loaded Toys R Us's balance sheet with untenable debt from then on. Apparently, they were paying $500M in interest alone per year vs. reinvesting capital in the growth of their offline stores or online retail.

I personally feel that, similar to bookstores, which have seen a resurgence in neighborhood mom & pop stores, while Amazon will continue to dominate, a lot of smaller toy retailers (both online and offline) will spring up to take Toys R Us's space.

dforrestwilson 10 hours ago 4 replies      
Take the same building, reduce the space dedicated to toy sales, open up a series of indoor playgrounds and gymnasiums. Heck maybe carve out a daycare facility.

Maybe some sort of toy lab for kids to try before they buy.

Refocus the concept on being a real place for kids to grow and I think you'd see customers coming in the door again.

smaili 9 hours ago 6 replies      
It really is unfortunate that stores I grew up with as a kid, ones like KB Toys, Montgomery Wards, Service Merchandise, Blockbuster, Kmart/Sears, and now even Toys"R"Us have gone by the wayside for one reason or another.

It both depresses me and blows my mind that future generations may not even be able to experience Brick and Mortar.

dadrian 7 hours ago 7 replies      
The CEO of Toys 'R' Us as of 2015 is Dave Brandon, the former CEO of Domino's Pizza. It was after he left Domino's in 2009 that they started the "We used to suck, we don't anymore, please come try us again!" advertising campaign.

After Domino's, Dave Brandon was the athletic director at the University of Michigan. During his time as AD, he managed to alienate the majority of the fan base, hire a coach who got worse over the course of four years, ended with a losing record, and gave up the title of "winningest program" to Notre Dame, violate FOIA law, repeatedly mishandle PR situations (including the time where it appeared that a QB with a concussion got put back in the game), and run up the athletic department's debt. He got fired on Halloween 2014, after slightly over four years as AD.

Color me not surprised Toys 'R' Us went under.

marsrover 9 hours ago 0 replies      
I remember being young and looking forward to it every year: near Christmas, my mom and grandma would take me to Toys 'R' Us to pick out my present.

Feels weird to hear they're going bankrupt. Understandable, though. Huge stores with random assortments of product (non-essential product, unlike toilet paper et al.) aren't cost effective or convenient in comparison to something like Amazon.

nielsbot 9 hours ago 0 replies      
Bain Capital strikes again?
otoburb 9 hours ago 1 reply      
Does this mean that the "R Us" trademark suits will cease[1]?

[1] https://www.lexisnexis.com/legalnewsroom/intellectual-proper...

andreygrehov 4 hours ago 1 reply      
I'm not surprised. They sell pure garbage. Every single time I go there to get a toy for my kid, I can't pick anything. I end up spending an hour walking around choosing the best of the worst, and I keep repeating to myself something like: "Where does this shitty store get the money to pay the rent?"
kemiller 9 hours ago 6 replies      
Can someone explain why it would be loaded up with debt on going private?
vonkale 7 hours ago 0 replies      
From SEC (shortened):

| In millions     | 2017   | 2016   | 2015   |
| Net Sales       | 11,540 | 11,802 | 12,361 |
| COGS            | -7,432 | -7,576 | -7,931 |
| Operating Earn. | -460   | -378   | -191   |
| Interests       | -455   | -426   | -447   |
| Total loss      | -6     | -156   | -452   |

1. Their revenue is quite stable but decreasing
2. Their EBIT is positive and increasing
3. Their operating cashflow is negative but increasing
4. So they seem to be running out of cash...
5. They cannot borrow more with such bad terms, e.g. 12% interest.

IMTDb 6 hours ago 0 replies      
"Ratings agencies have rushed to cut their credit ratings on Toys R Us to reflect the sinking market sentiment...[S&P] had the retailer rated B- just two weeks ago, and Moody's Investors Service still has a B3 rating and stable outlook for the name."

Ratings agencies once again showing how efficient they are at actually rating debt. B3 and a stable outlook for a company that has actually filed for bankruptcy; you gotta be kidding me.

runesoerensen 3 hours ago 0 replies      
slantedview 8 hours ago 0 replies      
Leveraged buyouts take down another victim. As always, it's shocking that these are legal.
tempestn 7 hours ago 1 reply      
> With speculation of a bankruptcy mounting, shares of Toys R Us's vendors tumbled on Monday. Mattel Inc., the maker of Barbie and Fisher-Price, fell 6.2 percent -- its worst decline in seven weeks. Shares of Hasbro, the company behind Monopoly, Nerf and Transformers, dropped 1.7 percent.

Does Toys R Us owe money to Mattel? Or are people thinking that if they go away or significantly reorganize, people will just buy fewer toys of these types? On the surface a 6% drop for Mattel seems huge based on this news. If a grocery chain goes into bankruptcy, you're not going to see Coca-Cola stock drop.

euske 6 hours ago 0 replies      
This episode of Planet Money (best podcast evah btw) kinda explains why retailers like Toys 'R' Us were defeated by Amazon.


Crontab 9 hours ago 4 replies      
I have come to the conclusion that it is very hard for specialty businesses to survive in the era of Walmart and Amazon, which does not bode well long term for places like Barnes & Noble and GameStop. I hope I am wrong.
Spooky23 9 hours ago 0 replies      
Too bad. Toys r Us has a great selection and good pricing. I found that they almost always beat Amazon on toys and eliminated the risk of counterfeit stuff, especially for things like Lego.
speg 9 hours ago 1 reply      
Aw man, I've just rediscovered this place with our 9 month old son. Was looking forward to going there as he grew up. Hopefully they can work out some sort of restructuring..
aklemm 6 hours ago 0 replies      
A toy store that isn't even a little magical to walk into is not long for this world.
marcell 8 hours ago 2 replies      
Toys 'R' Us might have been a great store when we (I) were kids, but I went there a few months back--what a wasteland. Barbie, GI Joe, and movie tie-in action figures as far as the eye can see. Good riddance. People can buy crappy Angry Birds dolls on Amazon just as well as at Toys 'R' Us.

I hope this opens up opportunities for superior brick & mortar toy stores.

kizer 9 hours ago 0 replies      
Damn. I was just in one like two weeks ago looking for a laser tag system. The prominent NERF gun display right by the entrance was glorious. NERF has progressed significantly and has also developed a new kind of spherical ammo in case you were wondering. I'll have to buy a few while they're on clearance.
krob 8 hours ago 0 replies      
I don't want to grow up, I want to be a Toys `R` Us kid......
javajosh 9 hours ago 2 replies      
Of all the things in the world, children's toys are THE most amenable to replacement by makers, tinkerers, and other people who want to do something in the real world.

(Alas, that's not what's happened. People just buy the cheap plastic crap from Amazon.)

yeukhon 6 hours ago 0 replies      
When they closed down their flagship at Times Square, I suspected TRU was not doing very well.
ars 9 hours ago 0 replies      
I guess I'm never getting my eToys gift certificate redeemed then :(
downrightmike 9 hours ago 0 replies      
Ah yes, another Bane Capital success story.
Keep your Slack distractions under control with Emacs endlessparentheses.com
59 points by zeveb  10 hours ago   4 comments top 4
_asummers 3 hours ago 0 replies      
I was rather displeased with the built in color scheme on a dark background, so I changed it to a nice purple and removed the underline:

 (set-face-attribute 'slack-message-output-header nil :foreground "#C080FF" :underline nil)
Binding slack-select-unread-rooms to something useful is a good choice. I have it bound to M-T.

If your users have avatars in their usernames, you can make them go away in the source of slack-user-message.el, in slack-message-sender, with

  - (status (slack-user-status (slack-message-sender-id m) team))
  + (status "")
I found people with stuff in their statuses made room names distracting, especially with group chats. In slack-im.el, in slack-room-display-name,

  - (format "%s %s"
  -         (slack-room-name room)
  -         status))
  + (format "%s" (slack-room-name room))
I am not good enough at Elisp to figure out how to turn these into hooks, and I've been a bad open source person by not opening an issue about them.

wslh 5 hours ago 0 replies      
I think all this (distraction-aware apps) opens up a new killer feature that can bring healthier habits to teams. It is obvious that Slack's omnipresence can drive you mad, but Slack can easily infer that you cannot attend to every @channel or @your-name mention. In that case Slack could act as a mediator and recommend that abusers slow down, or offer specific analytics tools to executives to improve communication structures.
nunez 8 hours ago 0 replies      
There is also a Golang Slack client, slack-cli, that is quite good.
accidentalrebel 7 hours ago 0 replies      
You can also filter by severity (e.g. high, trivial, etc.)
An efficient journal (2012) harvard.edu
20 points by jampekka  5 hours ago   1 comment top
dougmccune 2 hours ago 0 replies      
This is definitely a good example of how, under the right conditions, a quality journal can be run incredibly cheaply (on the order of < $10 per published article). And it looks like JMLR has a better impact factor (in the 2.4 range) than the Springer journal (IF ~1.8) that the editorial board resigned from [1]. So they're obviously doing it far more cheaply, with good results. I don't think you can extrapolate from this that the same model can be applied to every field (as the author acknowledges), but it's certainly a good example to try to emulate. Another would be Discrete Analysis [2] for an example in Mathematics (also a field well-suited for efficient publishing).

I know this submission comes on the heels of the discussion of scholarly publishing on HN a day or two ago [3]. It's certainly a good counterpoint to my previous argument that you can't run a journal super cheaply, although I'd argue that one or two cases in one or two fields don't prove you can scale such a system to work for all of academia. But it certainly shows that it's possible to do at a small scale, and maybe there's someone clever enough to figure out how to scale it up.

[1] http://www.springer.com/computer/ai/journal/10994

[2] http://discreteanalysisjournal.com/

[3] https://news.ycombinator.com/item?id=15265507

How to Read a Schematic rawhex.com
43 points by iuguy  7 hours ago   4 comments top 2
Animats 2 hours ago 1 reply      
That's a breakout board - mostly connectors, little functionality. Not too interesting to study. A simple blinky board with a 555 timer would be a better place to start.

Here's one of my schematics[1] with a detailed explanation of how it works.[2] This is a small board which is used to power old Teletype machines. It's a mixed analog/digital board, with a custom switching power supply onboard to provide the high output voltage needed using only power from the USB port.

This gives some insight into why modern power supplies have so many parts. They work by creating big spikes, and they're always a few microseconds from a short circuit. So they need bypass capacitors and ferrite beads in the right places, and protection circuitry in case something fails. (MOSFETs tend to fail into the ON state.)

If you really want to learn this stuff, get "The Art of Electronics", by Horowitz and Hill.

[1] https://raw.githubusercontent.com/John-Nagle/ttyloopdriver/m...

[2] https://github.com/John-Nagle/ttyloopdriver/blob/master/READ...

netvarun 3 hours ago 1 reply      
One trick I learned at university while analyzing a circuit is to identify the various 'design patterns' in it - just like software engineering, circuit design also features a lot of recurring common patterns.

Here is a fantastic post on it: http://www.arachnidlabs.com/blog/2013/10/17/electronics-patt...

A decade after the crisis, The Fed plans to shrink its bond holdings wsj.com
60 points by jpelecanos  5 hours ago   40 comments top 5
joe_the_user 4 hours ago 2 replies      
Color me completely skeptical - not that it won't happen (though it might not) but that it will mean what the WSJ seems to say.

One of the things about the "winding" which started all this was that it involved bailing out institutions considered too-big-to-fail.

And that bailing out altogether meant that such institutions had the "full faith and credit of the US government", just like what's printed on dollar bills and these institutions could effectively print money more or less like the Fed.

So the Fed itself has said it will start selling its particular portfolio, with the proviso that it will stop if anything seems to be going wrong. Which is to say that this is "privatizing the bubble", since the too-big-to-fail institutions can effectively "print" any money that leaves via the Fed's action. Obviously, there are micro ways that this will play out; this will change the color and emphasis of the "Greenspan Put" or its many latter-day equivalents, but those are small adjustments, not fundamental shifts - i.e., we're still in bubble-nomics, well, unless the Fed really fumbles this.

module0000 4 hours ago 12 replies      
Not the usual story you see on HN... but it's still interesting - if the trading of US debt is the kind of thing that gets you going.

That said, it does get me going. What this story is telling you is:

1) Futures on US treasury bonds are going to fall, as the DOT unloads 42 million of them. This is a lot, and it will affect the value of your investments/401k in a non-trivial way.

2) Equities (and futures based on them, e.g. ES/NQ/DJIA) are going to rise, simply due to their inverse relationship with treasury bonds.

3) Commodities (such as oil, gasoline, natural gas) will increase in price.

4) Foreign interests will have a "fire sale" of US debt available to them for purchase. Whether or not they will buy it is anyone's guess. I'd say "if you know, you should tell us!", but we all know that won't happen.

What does this mean for the average investor? Short on bonds, long on equities. Your 401k or other managed investment portfolio is likely taking this approach on your behalf anyway (if they aren't, fire them and find one that does).

Edit: If you want to know what these particular financial instruments are and how they work (bonds and how they are traded), see here: https://www.cmegroup.com/education/files/understanding-treas...
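The mechanics behind (1) are just discounting arithmetic: when a flood of supply pushes yields up, the present value of existing fixed coupons falls. A rough sketch with made-up numbers (not actual Treasury data):

```python
def bond_price(face, coupon_rate, ytm, years):
    """Present value of a fixed-coupon bond: discounted coupons plus
    discounted face value, with annual compounding."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + ytm) ** t for t in range(1, years + 1))
    pv_face = face / (1 + ytm) ** years
    return pv_coupons + pv_face

# A 10-year, 2% coupon bond trades at par when yields are also 2%...
print(bond_price(1000, 0.02, 0.02, 10))  # par: 1000 (up to rounding)
# ...and loses value if selling pressure pushes yields up to 3%.
print(bond_price(1000, 0.02, 0.03, 10))  # roughly 915
```

Futures prices track the underlying bonds, hence the short-bonds/long-equities framing above.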

sitharus 4 hours ago 0 replies      
CaliforniaKarl 4 hours ago 1 reply      
It seems to me that it's unusual to see something be cleaned up, without that something also being totally destroyed. I hope this goes well!
AWS announces per-second billing for EC2 instances techcrunch.com
267 points by jonny2112  12 hours ago   114 comments top 16
deafcalculus 54 minutes ago 3 replies      
I really wish AWS would allow users to cap billing. Something that freezes all AWS services if the monthly bill exceeds X would make me a lot more comfortable when experimenting with AWS.
zedpm 11 hours ago 4 replies      
That's sure nice, but I'm waiting for AWS to switch to automatic sustained use discounts [0] like GCP offers.

[0]: https://cloud.google.com/compute/docs/sustained-use-discount...

aidos 9 hours ago 0 replies      
This is one of the better things to happen in ec2 in years for me. We have a bunch of scripts so a spot instance can track when it came online and shut itself down effectively. It took far too much fiddling around to work around aws autoscale and get efficient billing with the per hour model. In the end we came up with a model where we protect the instances for scale in and then at the end of each hour, we have a cron that tries to shut all the worker services down, and if it can't it spins them all up again to run for another hour. If it can, then it shuts the machine down (which we have set on terminate to stop). The whole thing feels like a big kludge and for our workload we still have a load of wasted resources. We end up balancing not bringing up machines too fast during a spike against the long tail of wasted resource afterwards. This change by ec2 is going to make it all much easier.
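A minimal sketch of that end-of-hour cron decision, with hypothetical drain_workers/restart_workers/shutdown_host callables standing in for the real service-management and EC2 calls:

```python
def end_of_hour_check(drain_workers, restart_workers, shutdown_host):
    """Run near the end of each billed hour: if all worker services can
    be stopped cleanly, shut the host down (set to terminate on stop);
    otherwise commit to another full hour of work."""
    if drain_workers():       # True when no work remains
        shutdown_host()
        return "shutdown"
    restart_workers()         # work remains: spin everything back up
    return "another-hour"
```

With per-second billing the whole dance disappears: the instance can simply stop the moment the queue is empty.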
gumby 9 hours ago 4 replies      
Back to the future: this was how computing worked back in the punch card days. Minicomputers and personal computers were supposed to liberate you from this tyranny: computing so cheap that you could have a whole computer to your self for a while!
nodesocket 10 hours ago 1 reply      
Per second billing is somewhat of a gimmick just so Amazon can say they are more granular than Google Compute. The difference between seconds and a minute of billing is fractions of a cent. Rounding errors.

The exception is Google Compute has a 10 minute minimum, so if you are creating machines and destroying them quickly, per second billing will be noticeable.
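The "fractions of a cent" claim is easy to sanity-check; here's a sketch with an illustrative $0.10/hour rate (not a real instance price):

```python
import math

RATE_PER_HOUR = 0.10  # illustrative, not an actual AWS price

def cost_per_minute_billing(seconds):
    # Round the run up to whole minutes, then charge pro rata.
    return math.ceil(seconds / 60) * RATE_PER_HOUR / 60

def cost_per_second_billing(seconds):
    return seconds * RATE_PER_HOUR / 3600

# Worst case for per-minute billing: just past a minute boundary.
run = 61  # seconds
diff = cost_per_minute_billing(run) - cost_per_second_billing(run)
print(f"${diff:.6f}")  # well under a cent
```

The gap only matters when instances are created and destroyed at high volume, which is exactly the 10-minute-minimum case the comment notes.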

lostapathy 10 hours ago 1 reply      
This should enable some entirely new use cases, especially around CI and automation in general.

Per-second billing greatly reduces the overhead to bringing up an instance for a short task then killing it immediately - so I can do that. There's no need to build a buffer layer to add workers to a pool and leave them in the pool, so that you didn't end up paying for 30 hours of instance time to run 30, two-minute tasks within an hour.
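The 30-task arithmetic works out like this (illustrative $0.10/hour rate, with the old model rounding each fresh instance up to a full hour):

```python
HOURLY_RATE = 0.10       # illustrative rate, not a real AWS price
TASKS, TASK_MINUTES = 30, 2

# Old model: each fresh instance is billed for a full hour.
old_cost = TASKS * HOURLY_RATE                            # 30 instance-hours
# Per-second model: pay only for the 60 minutes actually used.
new_cost = TASKS * TASK_MINUTES * 60 * HOURLY_RATE / 3600

print(old_cost, new_cost)  # 3.0 vs 0.1: a 30x difference
```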

JosephLark 11 hours ago 3 replies      
Likely due to GCP competition. I believe GCP was always per-second? [Edit: Misremembered that; they were always per-minute. Lots of good information below directly from the related parties.]

Azure looks to be per-hour [Edit: Wrong again, they are per-minute as well. Oddly enough, I did check their pricing page before, but missed the per-minute paragraph and only saw the hourly pricing] but I'm seeing something about container instances possibly being per-second.

djhworld 11 hours ago 0 replies      
This is great news and a long time coming.

I really hope Amazon build something like Azure Container Instances [1], as per second billing would make this sort of thing feasible.

[1] https://azure.microsoft.com/en-us/services/container-instanc...

YokoZar 10 hours ago 2 replies      
I once considered writing an EC2 autoscaler that knew the exact timestamps of the instances so that it could avoid shutting down VMs that still had 59 minutes of "free" time left because they'd been up across another hour-long threshold. That sort of nonsense logic shouldn't be useful, but Amazon was giving a huge economic incentive for it.

This is certainly a long time coming.
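That "nonsense logic" is a one-liner: given an instance's launch time, compute the prepaid minutes left in its current billed hour (a sketch assuming hour boundaries measured from launch):

```python
from datetime import datetime, timedelta

def prepaid_minutes_remaining(launched_at, now):
    """Minutes left in the already-billed hour under hourly billing."""
    elapsed = (now - launched_at).total_seconds()
    return (3600 - elapsed % 3600) / 60

launch = datetime(2017, 9, 19, 10, 0)
# One minute into its second billed hour: 59 "free" minutes remain.
print(prepaid_minutes_remaining(launch, launch + timedelta(minutes=61)))  # 59.0
```

Per-second billing makes this quantity permanently near zero, so an autoscaler can just kill the least-loaded instance.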

rsynnott 7 hours ago 0 replies      
Ah, finally. They've ruined my idea for an optimal EMR job runner. Under the old system, if you have a linearly scalable Hadoop job, it's cheaper to, say, use 60 instances to do some work in an hour vs 50 instances to do the work in 70 minutes, assuming you're getting rid of the cluster once you're done. No more!
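Under hourly rounding the cluster-sizing arithmetic goes like this (illustrative $1/instance-hour rate):

```python
import math

RATE = 1.0  # illustrative $/instance-hour

def hourly_billed_cost(instances, minutes):
    # Each instance is billed for whole hours, rounded up.
    return instances * math.ceil(minutes / 60) * RATE

# 60 instances finishing in 60 minutes vs 50 instances taking 70 minutes:
fast = hourly_billed_cost(60, 60)   # 60 instance-hours
slow = hourly_billed_cost(50, 70)   # 70 min rounds up to 2 h -> 100
print(fast, slow)  # 60.0 100.0
```

With per-second billing both runs cost almost exactly their instance-minutes, so the over-provisioning trick no longer pays.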
nogox 5 hours ago 1 reply      
I think the per-second billing is beside the point. How does it help, if the EC2 instance takes tens of seconds to launch, and tens of seconds to bootstrap?

To make the most of per-second billing, the compute unit should be deployable within seconds, e.g. an immutable, prebaked container. You launch containers on demand, and pay by the second.

ttobbaybbob 9 hours ago 2 replies      
Interesting that the techcrunch link has thrice as many upvotes as the amazon link
andrewstuart 11 hours ago 3 replies      
Really welcome, although per millisecond would be better.

It's now possible to boot operating systems in milliseconds and have them carry out a task (for example respond to a web request) and disappear again. Trouble is the clouds (AWS, Google, Azure, Digital Ocean) do not have the ability to support such fast OS boot times. Per second billing is a step in the right direction but needs to go further to millisecond billing, and clouds need to support millisecond boot times.

SadWebDeveloper 11 hours ago 1 reply      
Serverless advocates/engies are probably the only people celebrating this; everyone else keeps waiting for self-renewing instance reservations... last time I forgot about them it was too late.
nunez 11 hours ago 0 replies      
This is great and will save a lot of people a good amount of money.
       cached 19 September 2017 07:02:02 GMT