Hacker News with inline top comments - 20 Jul 2017 - Best
1
153k Ether Stolen in Parity Multi-Sig Attack etherscan.io
855 points by campbelltown  18 hours ago   604 comments top 60
1
int_19h 10 hours ago 11 replies      
Just skimming through the Solidity docs, I see a lot of unwise decisions there aside from the weird visibility defaults.

All state is mutable by default (this includes struct fields, array elements, and locals). Functions can mutate state by default. Both are overridable by explicit specifiers, much like C++ "const", but you have to remember to do so. Even then, the current implementation doesn't enforce this for functions.
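
For illustration, a minimal sketch of those defaults (hypothetical contract and function names, assuming the current 0.4-era compiler described above):

    pragma solidity ^0.4.13;

    contract MutabilityDefaults {
        uint counter;

        // Mutation is the default: nothing in the signature warns that
        // this function writes state.
        function looksHarmless() returns (uint) {
            counter += 1;
            return counter;
        }

        // "constant" documents read-only intent, but the compiler does
        // not currently enforce it for functions -- a state write in
        // here would still compile.
        function readOnly() constant returns (uint) {
            return counter;
        }
    }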

Integers are fixed-size and wrap around, so it's possible to have overflow and underflow bugs. Granted, with 256 bits of precision by default that's harder to do than usual... but still pretty easy if you e.g. do arithmetic on two inputs.
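
A sketch of the wraparound (hypothetical contract; unchecked arithmetic is the language default here):

    pragma solidity ^0.4.13;

    contract OverflowDemo {
        // uint is uint256; arithmetic wraps silently with no exception.
        function add(uint a, uint b) constant returns (uint) {
            return a + b; // add(2**256 - 1, 1) quietly returns 0
        }

        function sub(uint a, uint b) constant returns (uint) {
            return a - b; // sub(0, 1) quietly returns 2**256 - 1
        }
    }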

Operators have different semantics depending on whether the operands are literals or not. For example, 1/2 is 0.5, but x/y for x==1 and y==2 is 0. Precision of the operation is also determined in this manner - literals are arbitrary-precision, other values are constrained by their types.
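
A sketch of the difference (hypothetical contract):

    pragma solidity ^0.4.13;

    contract DivisionDemo {
        // Number literals form arbitrary-precision rationals, so the
        // intermediate 1/2 is kept exact and this returns 2.
        function literalMath() constant returns (uint) {
            return 1 / 2 * 4;
        }

        // The same expression over typed variables truncates at each
        // step: x / y == 0 for x=1, y=2, so variableMath(1, 2, 4) == 0.
        function variableMath(uint x, uint y, uint z) constant returns (uint) {
            return x / y * z;
        }
    }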

Copy is by reference or by value depending on where the operands are stored. This is implicit - the operation looks exactly the same in code, so unless you look at declarations, you don't know what it actually does. Because mutability is pervasive, this can have far-reaching effects.
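
A sketch of the same-looking assignment doing two different things (hypothetical contract):

    pragma solidity ^0.4.13;

    contract CopySemantics {
        struct Record { uint value; }
        Record stored;

        function demo() returns (uint, uint) {
            // Assigning a state struct to a storage local binds a
            // *reference*: writes go straight to contract storage.
            Record storage byRef = stored;
            byRef.value = 1; // stored.value is now 1

            // Assigning it to a memory local makes a *copy*: writes
            // stay local and contract storage is untouched.
            Record memory byVal = stored;
            byVal.value = 2; // stored.value is still 1

            return (stored.value, byVal.value); // (1, 2)
        }
    }

The only clue to the two behaviors is the data-location keyword in the declaration; the assignments and the later field writes are textually identical.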

Map data type doesn't throw on non-existing keys, it just returns the default value.
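
In code (hypothetical contract), that looks like:

    pragma solidity ^0.4.13;

    contract MappingDemo {
        mapping(address => uint) balances;

        function balanceOf(address who) constant returns (uint) {
            // There is no "key not found" error: a key that was never
            // written yields the type's zero value, indistinguishable
            // from an account whose balance really is 0.
            return balances[who];
        }
    }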

The language has suffixes for literals to denote various units (e.g. "10 seconds" or "1000 ether"). This is purely syntactic sugar, however, and is not reflected in the type system in any way, so "10 seconds + 1000 ether" is valid code.
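
A sketch (hypothetical contract):

    pragma solidity ^0.4.13;

    contract UnitsDemo {
        function mixedUnits() constant returns (uint) {
            // The suffixes are plain multipliers ("seconds" is 1,
            // "ether" is 10**18); the type system never learns that one
            // is a duration and the other a currency amount, so mixing
            // them compiles fine.
            return 10 seconds + 1000 ether; // == 10 + 1000 * 10**18
        }
    }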

Statements allow, but do not require, braces around bodies. This means that dangling "else" is potentially an issue, as is anything else from the same class of bugs (such as the infamous Apple "goto fail" bug).

Functions can be called recursively with no special effort, but the stack size is rather limited, and it looks like there are no tail calls. So there's the whole class of bugs where recursion depth is defined by contract inputs.

Order of evaluation is not defined for expressions. This in a language that has value-returning mutating operators like ++!
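
A sketch of how that can bite, using a state-mutating call in place of ++ (same hazard, hypothetical contract):

    pragma solidity ^0.4.13;

    contract EvalOrderDemo {
        uint i;

        function bump() returns (uint) {
            i += 1;
            return i;
        }

        // Nothing pins down which call runs first, so with side effects
        // the result depends on the compiler's choice:
        // 12 if evaluated left-to-right, 21 if right-to-left.
        function ambiguous() returns (uint) {
            i = 0;
            return bump() * 10 + bump();
        }
    }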

Scoping rules are inherited from JS, meaning that you can declare variables inside blocks, but their scope is always the enclosing function. This is more of an annoyance than a real problem, because they don't have closures, which is where JS makes it very easy to shoot yourself in the foot with this approach to scoping.

2
earlz 18 hours ago 16 replies      
Here's the root error I believe: https://github.com/paritytech/parity/blob/master/js/src/cont...

The initWallet function should have been marked internal, but was instead not marked. Unmarked functions default to public in Solidity, so anyone can call that function and reinitialize the wallet to be under their control.
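
A toy reduction of that failure mode (illustrative only, not the actual Parity source):

    pragma solidity ^0.4.13;

    contract Wallet {
        address public owner;

        // No visibility specifier, so this defaults to public: anyone
        // can call it, at any time, and make themselves the owner.
        function initWallet(address _owner) {
            owner = _owner;
        }

        // What was intended -- reachable only from inside the contract:
        // function initWallet(address _owner) internal { owner = _owner; }
    }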

3
finnh 18 hours ago 8 replies      
I've posted this before [0], but it's still apropos regarding the foolishness that is Ethereum.

[Ethereum] only makes sense if all of the following obtain:

(a) the code is 100% bug-free (b/c accidents cannot be rewound)

(b) all code-writers are 100% honest (their code does what they say)

(c) all contract participants are 100% perfect code readers (so as to not enter into fraudulent contracts)

(Strictly speaking, only one of (b) and (c) needs to be true).

None of these conditions will ever obtain.

[0] https://news.ycombinator.com/item?id=14471465

4
aresant 16 hours ago 2 replies      
From the post mortem (1):

- A hacker managed to exploit an ICO multisig wallet vulnerability and drain 44,055 ETH - $9,119,385 at present.

- A white hat showed up and "saved" 377,000 ETH - $78,039,000 !!! - by draining other accounts.

I get the "see cryptos are too insecure / it's a pyramid / it's a bubble / ICOs are scams / etc" arguments.

But holy shit turning a world currency into the wild west - for better or worse - is going to be disruptive, period.

That $10m out the window is like a Series A for a nefarious hacker with deep crypto skills. What does this success embolden or create?

I can only imagine the debacles that we have to look forward to, and I say that in full support of and as a long term believer in both blockchain and cryptocurrencies.

(1) https://press.swarm.city/parity-multisig-wallet-exploit-hits...

5
doener 17 hours ago 1 reply      
"my favorite part of this latest ICO hack is that it appears to have gone to same wallet as the dao hack ....."

https://mobile.twitter.com/IamNomad/status/88777698177709261...

"incredible plot twist: whitehat hacker supposedly saved most tokens from being stolen using the same vuln."

https://mobile.twitter.com/bcrypt/status/887775417406431232?...

"Multisig wallets affected by this hack: - Edgeless Casino (@edgelessproject)- Swarm City (@swarmcitydapp)- ternity blockchain (@aetrnty)"

https://mobile.twitter.com/maraoz/status/887755889897295872?...

6
cl0rkster 18 hours ago 2 replies      
7
ericb 15 hours ago 1 reply      
As Charlie Lee said:

"If the creator of Solidity, Gavin Wood, cannot write a secure multisig wallet in Solidity, pretty much confirms Ethereum is hacker paradise."

https://twitter.com/SatoshiLite/status/887781929726038016

8
ericfrederich 32 minutes ago 0 replies      
Is this even illegal? Or just frowned upon? It seems this is just one big game: you find the weakness and you profit.
9
o- 37 minutes ago 0 replies      
I believe from looking at the fix [0] I was able to trace back the origin of the bug. This is my (unverified) theory. Can anybody familiar with Solidity confirm?

There is a catch-all [1] function in the public API (why???) of the wallet contract which uses delegatecall to delegate to the library contract.

"In a similar way, the function delegatecall can be used: the difference is that only the code of the given address is used, all other aspects (storage, balance, ...) are taken from the current contract." [2] (again, WHY???)

So when calling through this catch-all function, the "internal" modifier on "initMultiowned" apparently does not prevent it from being called, since the delegation happens from a function inside Wallet.

So the "attack" is to just tell the wallet to reset its owners to myself. This would be so embarrassingly trivial, that it's more like picking the money up from the floor, than a "heist".

This wallet contract is insane, and so is the programming language. Why would a language for such a critical application have such super unsafe constructs? This can't be true. Please, Solidity community, talk to your local PL people!

[0] https://github.com/paritytech/parity/pull/6103/files

[1] https://github.com/paritytech/parity/blob/02d462e2636f1898df...

[2] https://solidity.readthedocs.io/en/develop/types.html#addres...

[3] https://github.com/paritytech/parity/blob/02d462e2636f1898df...

10
pietrofmaggi 8 hours ago 0 replies      
This is the most useful explanation I've found about the vulnerability so far: https://blog.zeppelin.solutions/on-the-parity-wallet-multisi...

The explanation is a bit scary regarding what actually ended up in Parity's code:

The wallet contract forwards all unmatched function calls to the library using delegatecall... This causes all public functions from the library to be callable by anyone, including initWallet, which can change the contract's owners.

Edit: formatting

11
notsofastbuddy 18 hours ago 1 reply      
Parity shipped with a built-in Solidity contract to implement multi-sig wallets. That contract had a vulnerability that is now being exploited.

Importantly, the contract is not part of the Ethereum protocol, so other implementations and non-multi-sig Parity wallets are safe.

12
joshschreuder 17 hours ago 9 replies      
Let's play hypotheticals.

If you were the attacker and you now have the ETH in your wallet, how do you cash out without anyone identifying you, while maximising your profits?

Also has the attacker broken a law by exploiting a bug in the contract?

13
redm 1 hour ago 1 reply      
I'm not sure why everyone is piling on Solidity. At the end of the day, bugs happen in all languages, to all programmers eventually, and if you want to point the finger, it has to be at Parity.

If anything, it shows there needs to be a better process for peer review and some defaults in Solidity should be changed for security.

14
sna1l 18 hours ago 1 reply      
https://etherscan.io/address/0x1dba1131000664b884a1ba2384641... -- white hat group exploited the vuln and are holding people's crypto for them.
15
matt_wulfeck 17 hours ago 2 replies      
I'm sure they'll just hard fork again. And nobody cares because ethereum isn't actually being used for anything real, just a bunch of enthusiasts trying to get rich.
16
okreallywtf 1 hour ago 0 replies      
While reading the comments I had forgotten what DSL stood for and had to look it up; it usually means something other than what is intended here. To save anyone else the trouble, it's Domain-Specific Language.

https://en.wikipedia.org/wiki/Domain-specific_language

17
icelancer 17 hours ago 1 reply      
Black hat hackers nabbed $31MM in ETH. Not a bad payday due to a coding error.

https://etherscan.io/address/0xb3764761e297d6f121e79c32a6582...

18
djhworld 6 hours ago 0 replies      
On the Parity website they state the following:

> Every single line in our codebase is fully reviewed by at least one expert developer (and routinely two or more) before being placed in the main repository. We strive for excellence; static code checking is used on every compile to cut out bad idioms. Style is enforced before any alteration may be made to the main repository. Continuous integration guarantees our codebase always compiles and tests always pass.

19
lawrenceyan 17 hours ago 1 reply      
Silver lining: https://etherscan.io/address/0x1dba1131000664b884a1ba2384641...

Looks like about 300,000 ether was drained to safety by a white hat group before it could be stolen.

20
nkrisc 16 hours ago 2 replies      
Just thinking hypothetically here as a coin novice: could a bug like this theoretically have been implemented intentionally? If the code is the law, and the code is sufficiently complex, couldn't it be feasible to dupe people?
21
dvcc 17 hours ago 2 replies      
Can someone explain how immutable contracts get updated? From what I understand you can have one contract forward requests to another, and you can use some storage in the forwarding contract to determine the real target contract. But why would someone participate in a contract that is mutable?

I guess I am just wondering how this contract can be updated, given that it's on the blockchain and considered immutable.

22
swamp40 17 hours ago 0 replies      
The begging in the comments section, along with their wallet IDs, looks like a glimpse of the internet 100 years into the future.
23
theptip 15 hours ago 0 replies      
Can someone explain to me why you would want a smart contract for multi-sig? This is a feature that can be implemented easily off-chain, i.e. using split keys (Bitcoin has had this approach for some time).

Seems like having this complex logic on-chain is asking for it to be exploited.

24
niahmiah 18 hours ago 4 replies      
Let me guess... another hard fork to undo this.
25
tudorw 16 hours ago 0 replies      
Entropy is not something you want from a currency. Also, paper money is not magic; it's a network of trust. I think blockchain applications are out there, I just don't think cryptographic currencies are their best use.
26
redm 13 hours ago 0 replies      
The blog announcement from Parity:

https://blog.parity.io/security-alert-high-2/

27
jondubois 17 hours ago 0 replies      
The problem with Ethereum is that it's just way too complex. The more complex something is, the more bugs and vulnerabilities there are going to be.
28
ericb 15 hours ago 0 replies      
No rollback this time. The chain with this hack must have the longer Proof-Of-Vitalik.

https://twitter.com/VitalikButerin/status/887782650026631168

29
rboyd 17 hours ago 0 replies      
You can see that this is also affecting tokens. Check the whitehat effort (Token Transfers / View Token Balances) on this wallet https://etherscan.io/address/0x1dba1131000664b884a1ba2384641....

$30M worth of BAT, $26M ICONOMI, $17M CFI, $1.4M EOS

A historic episode, which is sure to spur many a conversation about what disclosure means in the blockchain era.

30
e79 9 hours ago 0 replies      
The vulnerability was extremely simple, as suggested by the three-keyword-long patch. I've written about this and other Solidity/EVM bugs from a technical perspective, if anybody is curious:

- https://ericrafaloff.com/parity-multi-sig-contract-vulnerabi...

- https://ericrafaloff.com/analyzing-the-erc20-short-address-a...

I think at least a big part of the solution to these security problems is two-fold:

- More secure conventions. All of the gotchas in Solidity make for a bad time. Even non-security bugs create a bad developer experience. Making functions private by default would be a start.

- More code review. Engineers need to be diligent or hire security professionals who are (I'm one).

31
joeblau 3 hours ago 0 replies      
It's being put back: https://news.ycombinator.com/item?id=14811534

Edit: Without Vitalik or a hard fork.

32
jamespitts 16 hours ago 0 replies      
Helpful information for users potentially affected by this issue:

- The vulnerability is in Parity's "enhanced" multi-sig contract

- This affects Parity 1.5 and later

- Parity 1.5 was released on January 19, 2017 (have you created multi-sigs in Parity since then?)

- The canonical multi-sig contract used in Mist / Ethereum Wallet does NOT have this vulnerability

- 0x1db is a community "white hat" sweep effort and not an attacker (See: https://etherscan.io/address/0x1dba1131000664b884a1ba2384641... )

33
codewiz 17 hours ago 0 replies      
The bug in the wallet contract was fixed one hour ago with this commit: https://github.com/paritytech/parity/pull/6102/files/e06a1e8...

Parity bug: https://github.com/paritytech/parity/pull/6102

34
ericb 15 hours ago 0 replies      
Things like this are why I think Tezos, when/if it comes out, has a bright future. I want a formal proof for any contract I use with real money.
35
kensey 15 hours ago 1 reply      
The great thing about reading this comment thread is that I basically already read it a couple of weeks ago, because a friend of mine (David Gerard, of Wikipedia, RationalWiki and Rocknerd Internet fame) let me preview his forthcoming e-book _Attack of the 50-Foot Blockchain_. There's a whole section in there about smart contracts, Ethereum, and The DAO that goes over much of what commenters here have mentioned ("non-reversibility, till it's our money at stake", the requirement that everyone write and read code perfectly, the problems with the very idea of immutability in contracts, etc.)

If people are interested, it's on Amazon: http://amzn.to/2trOjJS (I have no financial interest in it, but I bet a lot of people in this thread would enjoy reading it and/or writing long diatribes on why he is wrong about everything in it.)

36
coinme 14 hours ago 0 replies      
Better techniques are required. Solidity is clearly not ready to be used to secure billions of dollars that can be anonymously stolen in an instant. Fuzz testing should be an absolute minimum. Formal proofs and a simpler language should be the ultimate goal.

Hopefully the Ethereum foundation takes note, because this problem is not going away, and they are responsible for $20B of market cap. I realise that Ethereum is still young, but they have chosen to build a product that can be used in a multitude of ways without enough thought about how to keep the value secure. I wouldn't even know where to start when deciding whether it's safe to use a smart contract, and I understand the concepts well. If Ethereum is ever going to grow into its current market cap it will have to be safer for use by everybody.

37
likeclockwork 14 hours ago 0 replies      
If the code of the contract IS the contract, how was anything 'stolen'?
38
hohenheim 7 hours ago 1 reply      
I wonder why the black hat didn't drain all the money and instead left the rest for the white hat group.
39
sparky_ 6 hours ago 0 replies      
Didn't they fork the project a while ago due to theft?
40
curiousgal 17 hours ago 0 replies      
Maybe it was a feature not a bug.
41
campbelltown 15 hours ago 0 replies      
It appears the hacker has begun moving ether from the account. The number presented in this link will no longer match the amount in the title. There is currently 83K ether remaining.
42
rocky1138 18 hours ago 1 reply      
How do we know this is stolen? The link doesn't provide much detail.
43
viach 9 hours ago 0 replies      
Looks like a good motivation to start learning Solidity.
45
codewiz 18 hours ago 2 replies      
Can someone ELI5?

I use Parity, I have a wallet contract deployed, it's night and I'm wearing sunglasses.

46
6nf 15 hours ago 0 replies      
Time for another hard fork!
47
mtgx 17 hours ago 1 reply      
So will the devs create another Ethereum fork to recover this money?
48
abhi3 18 hours ago 3 replies      
That's like 30 Million USD at current prices? This is close to the DAO hack in USD value, not another fork now surely?
49
ateevchopra 9 hours ago 0 replies      
$77 million worth of ETH was rescued by the white hat hackers and stored.

https://etherscan.io/address/0x1dba1131000664b884a1ba2384641...

50
qwertyuiop85 5 hours ago 0 replies      
0x2ee4899d44F086e8ee974399f404214de33F9b68

Please donate, I'll go full time auditing code from now on. WHG member.
51
kevinwang 18 hours ago 1 reply      
Can anyone explain? Don't know what I'm looking at.
52
davidw 15 hours ago 1 reply      
I miss patio11's posts on these things.
53
imron 10 hours ago 0 replies      
Don't worry, they can just do another hard fork and get the money back, amirite?
54
thecrazyone 9 hours ago 0 replies      
The link seems to be down. Did we DDoS it?
55
draw_down 17 hours ago 1 reply      
It's "cynical" to point out these problems will keep happening, but then they keep happening. So, not much to say.
56
tbarbugli 17 hours ago 2 replies      
how much money is that?
57
rjurney 14 hours ago 0 replies      
I can't even understand what you are all talking about. Crazy kids. I'm not even kidding. Usually I can figure out what the topic of conversation is if I'm not familiar with it, but in this case I'm like three degrees removed from comprehension.

Sounds like this is all probably dot com bullshit, but maybe something genius will come out of it that is unforeseen now.

58
samstave 18 hours ago 3 replies      
Forgive me for being harsh:

Why is there no "pen-test" phase for any cryptocurrency which hits the market?

So, let me understand; you're ostensibly smart enough to (perhaps as a body of contributors, even) develop a cryptocurrency offering - yet you're also fucking stupid enough not to have the same/wider network of ppl attempt to hack the fuck out of your plan?

Does this already occur? Or does some savant come along and own them?

We have fucking HIPAA FFS, and its compliance systems, for something as trivial as my stupid name.

So, ELI5: WTF are currencies doing/not-doing which (1) allows for such hacks and (2) allows exploits to go unseen?

60
qwertyuiop85 5 hours ago 0 replies      
0x2ee4899d44F086e8ee974399f404214de33F9b68

Please donate, I'm going full bug hunting from today on your behalf. WHG dev. S.
2
The Myth of Drug Expiration Dates propublica.org
603 points by danso  1 day ago   258 comments top 28
1
slr555 1 day ago 6 replies      
This issue is a conundrum. Most drugs don't fall off a cliff of efficacy when they reach their expiration date.

There are drugs such as tetracyclines that should never be used past their expiration dates because they degrade into toxic compounds. Certain classes of drugs such as anti-arrhythmics, or drugs like warfarin, are so dose-critical that I would not want them if they were out of date.

I worked in pharmaceuticals in a medically underserved community for a couple of years. At that time when drug samples expired, sales representatives had to return them to their companies for destruction. One doctor in the area made sure that all the drug reps knew he would accept short dated (but not outdated, which would have been against policy for reps) samples for a free clinic he ran. Everyone I knew participated when they had short dated samples. While reps could not distribute outdated samples, doctors had much more latitude in how they dealt with them. It was one of those rare and wonderful situations that was good for patients, created good will for reps and was all completely within regulations.

I should say this was some years ago and regulations may have changed since then.

2
harshreality 1 day ago 12 replies      
> The findings surprised both researchers: A dozen of the 14 compounds were still as potent as they were when they were manufactured, some at almost 100 percent of their labeled concentrations.

What kind of reporting is this? Anything less than 100% is not "as potent as when manufactured", and the sentence implies some of those dozen weren't close to 100%.

> The idea that drugs expire on specified dates goes back at least a half-century, when the FDA began requiring manufacturers to add this information to the label. The time limits allow the agency to ensure medications work safely and effectively for patients. To determine a new drug's shelf life, its maker zaps it with intense heat and soaks it with moisture to see how it degrades under stress. It also checks how it breaks down over time. The drug company then proposes an expiration date to the FDA, which reviews the data to ensure it supports the date and approves it. Despite the differences in drugs' makeup, most expire after two or three years.

That seems to be the problem. There was a procedure in place to set expiration dates scientifically, and it was ignored for some reason, limiting the legal shelf life of even the most stable compounds to a few years.

3
jessriedel 1 day ago 5 replies      
This article has some shoddy logic. For the vast majority of drugs, the actual marginal manufacturing price per pill is essentially zero. That means that when the expired pills are thrown away and new pills are manufactured, there is no true economic loss. Rather, all that potentially happens is a transfer between the consumer and the manufacturer.

If pill prices were set by some external force, this could at least be an important society-wide transfer. But in fact, in equilibrium pill prices will be affected by the expected rate at which pills expire without being consumed. Even when manufacturers have a monopoly, the manufacturer-surplus-maximizing price is determined by the demand curve of the consumer, which takes into account the expiry rate of pills. (If 10% of pills expire before I consume them, they are worth 10% less to me in expectation.)

Yes, I'm sure there are market models where expiry dates create net economic drag, or a net value transfer between manufacturer and consumer, but it's not even clear which direction the transfer goes. Such an analysis depends on the details of the world and how they are reflected in your model, which are completely absent in this article. Most importantly, the intuition that "letting $100 pills expire for no good reason must cause $100 of damage" is completely false when the marginal cost of manufacturing is low.

(There are exceptions where the marginal manufacturing process is expensive, but the article doesn't focus on these.)

4
hyperrail 1 day ago 1 reply      
> But neither Cantrell nor Dr. Cathleen Clancy, associate medical director of National Capital Poison Center, a nonprofit organization affiliated with the George Washington University Medical Center, had heard of anyone being harmed by any expired drugs. Cantrell says there has been no recorded instance of such harm in medical literature.

I am more than a little surprised by this. Fanconi syndrome, a kidney disease that can cause bone damage [1], has been repeatedly found in the medical literature to be caused by taking expired pills of the antibiotic tetracycline. [2]

That these people completely failed to remember these cases is distressing at best.

(I heard an interview with the author of this story on NPR Morning Edition today, and also scanned this story webpage. The only mention of tetracycline was by a web commenter on the story.)

[1] https://medlineplus.gov/ency/article/000333.htm

[2] https://scholar.google.com/scholar?hl=en&q=tetracycline+fanc...

5
mherdeg 1 day ago 0 replies      
I have a lot of respect for this kind of reporting from ProPublica. It's a really good public service to report news that isn't actually new.

In this case, part of the news is that the FDA's 1986 Shelf-Life Extension Program ( https://www.fda.gov/EmergencyPreparedness/Counterterrorism/M... ) has been working fine for decades. This is important for people to know and it's worth re-publishing from time to time as a reminder.

They had another great article like this last month, https://www.propublica.org/article/hundreds-of-judges-new-yo... , which basically reported "local courts in New York state had terrible problems according to an in-depth 2006 New York Times report and are still terrible in 2017". Great stuff.

Likewise a lot of the reporting on "civil forfeiture continues to happen and continues to be unfair, here are more examples" provides a valuable civic service.

I know it can be tough in a news organization to re-report something that everyone already knows is true. It can certainly be tempting to pass over truly important stuff in favor of seeking out brand-new news -- a fair amount of science reporting is driven by "what's in the journals this week" -- but this kind of long-term focus on "what's still true that needs your attention" is equally valuable and great work.

6
Reason077 1 day ago 5 replies      
"Berkowitz picks up a box of sodium bicarbonate, which is crucial for heart surgery and to treat certain overdoses. Its being rationed because theres so little available."

Huh? US hospitals have a shortage of Baking Soda?

7
gumby 1 day ago 2 replies      
(former pharmaceutical developer here). This article makes it sound like there's a big conspiracy, which there isn't.

A very few drugs (like erythromycin) become toxic so should be discarded.

Some drugs become less efficacious over time, but nobody really knows the shape of the curve, as they are simply tested to see if they have the same efficacy on day E as they did on day 0. Of course all drugs will become worthless as t approaches infinity, but you can guess that since tablets have a very low moisture content, if they are kept in a cool dark place it's likely they'll last a very long time. I also happily keep expired drugs in a controlled environment and use them; all drugs in my car's and backpack's first aid kits get replaced annually because they are exposed to harsh environments.

(Stockpiling drugs doesn't prove anything BTW: if you are stockpiling them against an emergency the presumption is that some efficacy is better than none).

Nobody is going to do accelerated life testing beyond what they have of course. I think extending the required lifetime is a good idea, though I question the size of the economic return claimed in the article.

The expiration dates on food are slightly more scandalous: the FDA doesn't require the same level of testing as they do on the medical side, so they are mainly set stupidly short. Last week I purchased some vacuum-packed lamb that had a manufacturer label with an expiration date a month away, but was prominently labeled to expire this week. And of course US egg producers take steps that reduce the storage time of eggs, which can be months old when you get them -- and then "expire" a week after getting home.

8
Steeeve 1 day ago 1 reply      
A Silicon Valley startup by the name of SIRUM (http://www.sirum.org/) has done some pretty incredible work in this space. They re-distribute medicine nearing its expiration from hospitals/pharmacies/etc. with surplus to areas that need it and can use it before it expires.

I'm sure they do more - I only have a slim memory from hearing one of their founders speak.

They do have a pretty incredible impact and deserve mention in this conversation - they've been attacking this problem from a "what can we do now" perspective for the last 5 or 6 years while others have been spending time debating what to do or if anything really needs to be done. They are good people.

9
Overtonwindow 1 day ago 1 reply      
Another facet to this is "Medication Adherence", which is defined generally as taking medication properly, but which the drug industry defines as taking all of your medication and getting a refill for more. Speaking only as a lobbyist who has (regrettably) worked for a Pharmacy Benefit Manager in the past, there is a tremendous amount of money and thought being put into how to get patients to take all of their medication. The belief is that this will lead to patients refilling their prescriptions, oh, and probably help them overall, but refills are the number one priority for the drug industry.
10
pipio21 1 day ago 0 replies      
Most of the time the drug itself handles time well but the excipient degrades. This makes it very hard to control how those drugs will work; it introduces so many variables.

It is like how not using the seat belt could save your life if you are thrown from the car in some specific circumstances, as happened to a person I know. But seat belts are there because they work better most of the time, since engineers can design for safety with fewer variables.

My father had an emergency immune problem (one we have already identified, as it is recurrent) and the only required drugs in the house had expired some years earlier. He took those before we bought the new ones, and the old ones were at about 1/3 the potency of the new ones.

Given that most of the price of drugs comes from intellectual property and patents, and each pill costs dollar cents to make, I don't see the urgency of taking expired drugs.

If hospitals throw away expired drugs then it is a good factor to take into account in the global negotiation process, and I bet they already do in countries that buy drugs in bulk.

11
korethr 1 day ago 0 replies      
I'm glad people are contacting the FDA to try to get them to extend drug expiration dates, but this article gives me the impression that after the FDA fails to reply, or replies non-committally one way or another, people give up. Have any of the efforts involved tried to move up the chain of command to Congress? The FDA might not seem to be willing to move on the issue, but if one could get one's senator or representative to care, that congressman probably has more means to get the FDA to act than the average person.
12
noonespecial 1 day ago 1 reply      
The cynic in me suspects that if this gains too much traction, drug manufacturers will simply start adding self destruct compounds to all of their formulations "for our safety".

In fact, some of them have track records of such "ethical" behavior, it wouldn't even surprise me if they turned them poisonous vs simply inactivating them.

13
unics 1 day ago 1 reply      
In Denver there is a program that supplies hospitals internationally with expired drugs and equipment they would otherwise be without. It's been going on for decades and has saved many lives.
14
bcook 1 day ago 5 replies      
I always assumed that the expiration date was partly legal, in the way that you only have 1 year within which you can legally have the drug in your system.

I mean, if I am prescribed oxycontin, can I legally have the drug in my system for the rest of my life?

15
pitaa 1 day ago 0 replies      
I was once sedated for a procedure, and heard the Doctor ask for a certain type of needle. The Nurse replied that they were out; she had just thrown them all away since they were expired. The Dr asked "how expired", and she said "2 weeks". He replied that that was fine, he wanted them anyway. And then I heard the lid on the trash can opening...

I found it quite humorous; I couldn't care less that their supplies were a bit 'expired,' nor that the sealed packages had sat in the trash can for a bit!

16
barking 1 day ago 0 replies      
At one time we were required to keep an extensive list of emergency drugs at work.

Off the record, it was generally accepted that we should never, under any circumstances, use any of those drugs, bar adrenalin, for fear of doing more harm than good.

The cost was substantial and one company seemed to have the monopoly on supplying them. They also kept a record of the expiry dates and supplied replacements as stocks went out of date.

The most annoying thing was that they seemed to deliberately supply drugs with most of their lifespan expired.

They of course denied that this was a policy.

17
valuearb 1 day ago 0 replies      
Allowing drug makers to set expiry dates is a huge conflict of interest, they have massive incentives to expire their drugs as soon as possible. I'm surprised the FDA doesn't do the testing and set the dates itself.

The best solution is to require drug makers to replace expired drugs with new, for free. Given that manufacturing costs are typically a tiny fraction of sales price, this is not an expensive warranty. This will also give them incentive to make expiry dates as reasonable as possible.

18
lovemenot 1 day ago 1 reply      
Due to complexity it doesn't seem realistic to try to save money by using legacy drugs.

On the other hand, the economics of pharma industry are such that manufacturing cost is usually a very small proportion of price.

Therefore a better solution would be to introduce a mandatory new-for-old trade-in policy. So they wouldn't lose money on the deal, pharma companies would be rewarded at mfg cost. I.e. sans profit, marketing or research cost.

19
rectangletangle 1 day ago 1 reply      
Degraded tetracycline has been documented to be dangerous in those who are suffering from renal failure.

There's also the possibility that a person may be suffering from a new undiagnosed condition which had not yet developed when they were initially prescribed the drug. In this case you could think of it less as an expiration for the drug, and more as a suggestion to seek additional medical guidance before continuing after a certain date.

There's also the remote possibility that a new drug is developed which could potentially become dangerous when expired. The general population would have to unlearn little morsels of knowledge such as this, in the meantime people could be harmed.

Edge cases like this are enough in my opinion to warrant not spreading blanket advice like this, even if it's nearly always true. Erring on the side of caution is the best approach with medical affairs, even if it costs a bit extra monetarily. Giving patients potentially dangerous advice so they can save money is ethically questionable at best.

20
devy 1 day ago 1 reply      
Read a similar article on HN before: the U.S. Army was able to save millions after the FDA okayed the use of expired drugs. [1]

[1]: https://www.thepharmaletter.com/article/fda-tests-let-milita...

21
djrogers 1 day ago 2 replies      
It seems that a law simply allowing private testing of drugs for efficacy, the same way the federal government is allowed to, would go a long way to saving money here.

Hospital systems and HMOs are large enough to save millions for a bit of testing, and once enough data is gathered they can easily and safely adjust their retention policies.

22
JshWright 1 day ago 0 replies      
The "extended use dates" that Pfizer published mean that we now have 'older' drugs with expiration dates that are past the expiration dates of newer batches. They didn't just say "add 1 year to all expiration dates" they said "for these specific lot #'s, the new expiration date is: XYZ".

They only published extended dates for some lots, since they expect more recent lots to still be in date by the time the shortage is addressed.

I may have spent the better part of a day a couple weekends ago relabeling a bunch of vials with a sharpie...

23
cwkoss 1 day ago 0 replies      
Seems like an easy fix would be to require the manufacturers of patented drugs to replace, free of charge, any unused expired drugs.

Incentivizes the manufacturers to accurately measure expiration times and the marginal cost to them of replacement is much less than to the consumer.

24
hemanthtt 1 day ago 0 replies      
Can't there be a company which buys up all the "expired" drugs, gets the appropriate approval from the FDA, and resells them for a cheaper price?
25
maxerickson 1 day ago 2 replies      
What's going on with our healthcare system that pharmaceutical grade baking soda is being rationed?

W T F.

26
samstave 1 day ago 0 replies      
I wonder if there are any that become more potent/dangerous after a period of time.
27
dmschulman 1 day ago 2 replies      
The myth is that those dates on the side of most drugs are expiration dates.
28
coding123 1 day ago 1 reply      
Almost all of the drugs I have in my cabinet are expired, and all have been 100% as potent as the day I got them when I needed to take them years later. I have known this for a long time. There is also a reason the expiration date on your prescriptions is exactly one year after you get them from the pharmacy, when those pills have actually been sitting in a different bottle for months and months at the pharmacy. The entire reason for a 1-year expiration date is money.
3
Things Ive Learned from Reading IndieHackers toomas.net
558 points by scribu  1 day ago   139 comments top 15
1
donmatito 1 day ago 2 replies      
What I find interesting with Indie Hackers is that it covers a wide range of personal/business situations. It goes from real lifestyle businesses to beer-money-making side-projects.

I feel that there is a world of difference between a side-project that is free and one where you ask customers for their credit cards. Of all professional experiences, I have never learnt as much as I did taking a side-project from idea to MVP, then to beta users, then to paying customers, scaling server issues, and marketing strategies. It's not so much about the money as the fact that you learn so much on so many dimensions.

Sincere thanks + shameless plug, the interview about Smooz was fun, a good self-reflection exercise, and a good source of traffic too (https://www.indiehackers.com/businesses/smooz)

2
ThomPete 1 day ago 5 replies      
In other words: reading about success to become successful is like reading the autobiographies of lottery winners. There are no secrets to success other than luck, timing, and actually shipping your product (or playing the lottery).

Contrary to the lottery, though, far fewer people ship, and there are far more winning tickets.

There is nothing to learn about how to build a successful product.

If you want to read anything, read about the specific obstacles you run into.

Great read!

3
bigtunacan 22 hours ago 2 replies      
The 4-Hour Workweek is the first book listed and I have heard of it so many times I finally decided to read it. I picked up an old copy from the local library and I'm a little over 1/2 way through.

So far I have not been all that impressed. In general it seems light on anything really concrete and reads more like a motivational book. Where it does have concrete things, though, they are just really out of date (one example is that they recommend Yahoo Stores, where today someone would probably use Shopify or another alternative; another is that there is a LOT of talk about using magazine ads, but how many people are actually still reading/buying print mags today?).

What are other people's thoughts on this particular book, and what about the updates in the latest edition? Has it changed enough to be somewhat current with the realities of today?

4
rb808 1 day ago 5 replies      
Yeah this is the best for me:

> Ship. Ship. SHIP. The overwhelming failure case among people who read interviews like this one is that they spend 98 units of effort reading about running a business for every 2 units of effort running a business. Flip that on its head.

I wonder how many successful entrepreneurs actually spent time reading advice blogs when they were starting.

5
noxToken 1 day ago 9 replies      
I've always wondered what the advice would look like if you compiled a list of what not to do from failed companies. TFA is a list of the successful ones who got it right, so we know what they did to make it. Yet I'm sure many people follow this advice and still fail.

You can find wildly varying statistics, but somewhere between 50% and 90% of startups will fail within 2 years. What did those companies do (or failed to do) that made them close up shop?

6
konpikwastaken 21 hours ago 0 replies      
Heh, something I read earlier on indiehackers made me chuckle.

"Honestly, negativity about my business model is more likely to come from a community like Hacker News than it is from my readers."[1]

He's not wrong.

[1]: https://www.indiehackers.com/businesses/site-builder-report

7
superasn 12 hours ago 0 replies      
Very interesting post. I think the author could heed his own advice and could have hosted it on a domain like startuptools.com or something, because this blog post is going to be backlinked heavily, and with a touch of SEO this can be a regular source of targeted traffic.

Just add a PDF checklist at the bottom that needs an opt-in (anybody who reads this will convert at a very high rate) and soon you've got a small list. Pitch them the startup tools or whatever you think is best for the list (using affiliate links) and soon you have a site that is making a passive income. It's easier said than done, but creating a passive income really is that easy, and if you're doing the effort you may as well reap the rewards.

8
zapperdapper 1 day ago 9 replies      
All good advice, but I'm going to propose something that may be a bit contrary to current wisdom: if you really want a decent lifestyle/work-life balance don't start a business!

So what's the alternative?

Go contracting. Reduce your expenses to the bone, and limit the contracts you do. I now only take 3 month contracts and I do one a year. I make about 24K from that - I know many developers making double my rate. That 24K is more than double my expenses though. If there's a 6 month contract I really like the look of I will do it (especially if it's remote working) - but then I'll take at least a year off.

The great thing about a contract is you go in, do your thing, get out. Job done. No stress. No worry. No customers giving you grief and wanting their money back. No infrastructure going down with admin alerts at 3.00am. No hassle. I don't even have my own limited company. I use an umbrella company and while not as tax efficient I have no dealing with accountants, HMRC, tax returns and all that nonsense.

I think there are many good reasons to start a 'lifestyle' business. I'm just not convinced it's the way to go if work-life balance is what you are looking for.

9
tmaly 23 hours ago 0 replies      
One other thing not mentioned in the article is the community on Indie Hackers. If you are working on a side project, the majority of the people in the forum are there to help you. They have given me great feedback and ideas for my project.
10
soneca 1 day ago 3 replies      
I believe this list is missing what I consider to be the most important factor for success, an underlying strength that the most successful businesses I read about on IndieHackers (and elsewhere) have, and that is systematically underestimated as a reason for success: a previously acquired audience.

I believe audience is the most important currency in today's world, especially for digital products. Audience may be followers on Twitter, Instagram, YouTube, etc. Audience may be an email list. Audience may be traditional networking (say, if you are in a B2B niche business). If you do not have an audience, you can buy one with ads (even that is easier said than done); or you can borrow the audience from someone else, an influencer (I know there are ways to pay for influencers' audiences, but I do not believe it is very effective; getting some influencer to legitimately love your product and act as your referrer seems to me to be more effective, if harder). Or you can borrow the audience from something else, like HN, Reddit, PH.

All of the steps of launching and building a business, especially the very first ones, are enormously easier if you have an audience. And enormously - sometimes impossibly - harder if you do not have one. This is derived from the adage that your first idea is always wrong. You have to learn everything by shipping early and talking to customers, but if you ship early to 20 eyeballs and luckily talk to one or two potential users, this feedback loop just doesn't count. But if you launch to thousands of eyeballs, even if you are making the mistake of not shipping early or not focusing on talking to customers, you will receive unsolicited feedback from some of them.

And a previously built audience of people who trust you at a basic level will be much more responsive and engaged in rationalizing and vocalizing their opinions about your product. They will be the "early adopter" type (hard to get that when you buy an audience). And they also will (likely) be people that are your target audience (hard to get that when you borrow an audience from HN and similar).

Some of the most impressively successful businesses on IndieHackers (according to my own criteria) are the ones that had years of building an audience, through forums, email lists, and blogs.

A genuine audience is hard to plan ahead for. It does not seem plausible to have the vision, the will, and the diligence to think years ahead about the industry/niche where I want to launch a business. So it is good to launch a business where you have genuine passion; that way you will at least have the network (even so, not necessarily a large audience).

If you are in that position of having a large audience, seriously consider launching a business, because you have a very big unfair advantage (and follow all the advice on this list). If you are not, pay a lot of attention to how to reach your target audience. It is a very tough problem to crack.

11
Oras 1 day ago 2 replies      
Nice article, I would add:

1. "Be patient" as success does not appear overnight.2. "Don't be afraid to fail". It's hard to get it right from first time.

I guess second point is covered in ship,ship,SHIP :)

12
cbar_tx 11 hours ago 0 replies      
digital nomadism doesn't mean anything.
13
jacobrobbins 1 day ago 0 replies      
This is a great resource; a lot more stuff in here than a typical blog post.
14
Silhouette 23 hours ago 0 replies      
A lot of this seems to reflect the consensus among experienced HN posters as well: you have to actually ship something and charge for it, it has to be something people actually want instead of just what you enjoy making, and so on. Arguably much of this is just common sense, but maybe that's just hindsight talking.

The only one I strongly disagree with is "raise your prices". While this is common advice on HN as well, I think it should come with a caveat that it mostly applies to B2B businesses. If you're running a B2C business, your customers are spending their own money, and possibly on something that isn't a necessity and isn't immediately going to make or save them a greater amount of money in return. Customers in this situation may well be extremely price-sensitive, and even small adjustments in pricing can have dramatic effects on conversion rates. As a counter-example showing that a big price cut can be effective, look at the way that games suppliers like Steam and GoG run their sales.

15
dahoramanodoceu 1 day ago 1 reply      
I work one hour a morning. W00t!
4
Google launches Hire, a new service for helping businesses recruit techcrunch.com
527 points by tashoecraft  2 days ago   334 comments top 54
1
bigtones 1 day ago 2 replies      
The 'Hire' product just launched today actually came from an acquisition; it was not developed internally by Google at all.

It was developed by Bebop, a company founded by Diane Greene (founder of VMware) that Google acquired last year for $380 million. When Google decided to bring Diane Greene on full time to run Google Cloud, they had to purchase her company in order to facilitate that. Bebop originally had aspirations to shake up the enterprise software space by building a suite of applications, and the 'Hire' app was the first one they produced. Bebop employees have been working on it since then, and it had a long beta period before the launch today. As noted in the TechCrunch article, the 'Hire' platform also runs the Google for Jobs website that launched earlier this year.

http://fortune.com/2016/01/04/google-paid-380-million-for-di...

https://www.crunchbase.com/organization/bebop

2
agentgt 2 days ago 10 replies      
My company makes recruiting software, and we knew it was only a matter of time (a couple of years ago... now it's obvious) before Google would enter the recruiting industry.

Particularly because the major players other than LinkedIn basically rely entirely on Google. All Indeed, Monster, and CareerBuilder do is buy Google Ads and then essentially resell the marketing. This is similar to the situation TripAdvisor is in, but generally worse.

The current job boards / job aggregators (the companies above) are terribly unimaginative, generally not helpful, and often price gouge companies (if you're wondering what the difference is between an aggregator and a job board... there really isn't much, but one pulls the jobs in, aka crawls the web for them).

Now the boards/aggregators are trying to become more service based and offer higher value offerings for long term strategic reasons. Indeed is rolling out "Indeed Prime" and I believe Career Builder is offering something similar as well.

So as creepy as it is that Google is in the recruiting space I am optimistic that will finally provide some spark of innovation that is much needed in a very high touch industry devoid of it.

3
lettergram 2 days ago 17 replies      
I, for one, will never use a Google product again. The level of creepiness and the amount of data they have on all their users is insane.

I personally have switched to DuckDuckGo, Fastmail, FireFox (the mobile browser is awesome btw), and will replace Android as soon as a viable alternative is presented. Smooth sailing.

The problem with Google, is they are building a repository for themselves, but also for government. Couple that with their willingness to kill products, and you'd have to be insane to trust their services.

4
b3b0p 2 days ago 1 reply      
Any reason this is a link to TechCrunch instead of the original sources?

Google Hire: https://hire.google.com

Google Blog Introducing Hire: https://www.blog.google/products/g-suite/google-introduces-h...

5
inetsee 1 day ago 4 replies      
What I'd really like to see is a database of information about employers that's not readily available until you become an employee, like non-compete agreements, intellectual property assignments, etc.

If you've quit your job, sold your house / given up your apartment, moved all your stuff to a new city, then on your first day at your new job they hand you a stack of documents to sign including some (like non-compete agreements) that would have significantly influenced your decision to accept the job offer, then the employer has a great deal of leverage to get you to sign those documents.

It would be nice if this kind of information were more readily available in advance to people considering job offers.

6
Androider 2 days ago 13 replies      
I was just about to select a recruitment management service. Both Lever and Greenhouse have "call us" pricing, and sadly Google Hire follows the same trend. Only Workable has a clear pricing page, which gives them a big plus in my book. Since we're on G Suite, Hire could be a natural fit, but the requirement to book a demo (and the accompanying upside-down shaking to see how much money falls out of our pockets) is a big turn-off.

Would love to hear experiences from someone who has for example tried both Workable and Lever.

7
nealmueller 2 days ago 1 reply      
This product is built by the team that joined through bebop, a $380M acquisition which came with Diane Greene, who now heads Google Cloud (Chief of Cloud), and Bogomil Balkansky, who now heads Hire (VP).

https://venturebeat.com/2016/01/04/google-paid-380m-to-buy-b...

8
endorphone 2 days ago 4 replies      
A question I've always wondered -- If Google, or an employee at Google, used ML (or just classic techniques) to analyze the enormous, enormous troves of trojan horse data they have on the employees of virtually every organization across the globe, would it be illegal if they traded on that data? They could surely accurately call virtually any trend before anyone. They could see when morale at a firm drops, when hours drop, when employees start trying to get jobs elsewhere, etc. It is simply shocking the amount of data they have coalesced.

And while I generally have a good opinion on them, a product like this just seems like a step too far and risks threatening the trust users have in them.

9
hunvreus 1 day ago 2 replies      
We've tried Workable, Lever and about half a dozen other platforms. None of them stuck.

What has worked is to do everything in GitHub.

We wrote about it a few years ago [1] and should probably write an updated version of our approach, but in a nutshell;

- When candidates apply through our online form on our website or via email, we create a GitHub issue and assign it to the right person on our team.

- Everybody on the team gets to see who's applying and can easily take part in the discussion.

- We wrote a few small Chrome extensions that act as helpers for managing applications. For example Gdocs Preview [2], which allows us to display the preview of attachments (i.e. resumes) in the GitHub issue directly.

- We've added some automation with Zapier [3] (and some Python) to do things like:

  - Automatically close an issue and email the candidate if we label the issue as "To reject"
  - Automatically pull email answers from the applicants into the issue thread
  - Automatically send an email asking people to book an interview time with Calendly [4]
  - ...

Now, this works mostly because we're a software company first and we're using GitHub for everything [5]. Additionally, compensation and feedback on applicants are shared with the whole team; I'm not sure every organization would be comfortable with that level of transparency.

The main benefit is that there is virtually no friction for team members to help out with recruitment and share their opinion.

It's also part of our on-boarding to point new employees at their recruitment issue; they get to see what we discussed and how we perceived them through the process.

[1]: https://wiredcraft.com/blog/github-as-your-recruitment-platf...

[2]: https://github.com/Wiredcraft/gdocs-preview

[3]: https://zapier.com

[4]: https://calendly.com

[5]: https://wiredcraft.com/blog/github-for-everything/

10
sergiotapia 2 days ago 1 reply      
Man that UI is ugly. Material Design was a mistake for Google, all of their products look this way. Even the settings page on Chrome looks really bad and confusing.
11
nfriedly 2 days ago 0 replies      
Here's a link to the actual product: https://hire.google.com/
12
skummetmaelk 2 days ago 1 reply      
Soon companies will be able to specify wanted personality traits, and Google Hire will only recommend people who visit a certain set of websites and move in a specific social circle.

Fun times ahead.

13
pasharayan 2 days ago 6 replies      
It's easy to forget, but LinkedIn doesn't have rich resume data of candidates. With Hire, Google now gets rich resume & employee data - data, when coupled with search history, that can now be used to build better user profiles than before.

Given this, "Hire" is (or could become) a trojan horse into replacing the network effect that LinkedIn has created.

14
ngrilly 2 days ago 1 reply      
Seeing Google launch a SaaS product, in a specific domain mostly unrelated to its core business, is a bit worrying in terms of monopoly abuse.
15
mxuribe 1 day ago 1 reply      
Because I'm actively looking for another job, I instantly started thinking, "so how can I hack this to my benefit?" Other than making my public profiles easy for Google to index, and continuing to apply for relevant jobs, I'm not seeing an angle...at least not one that benefits candidates (like myself) applying for jobs. Unless I'm missing something...?
16
MarketingJason 2 days ago 1 reply      
>"Pricing is based on the size of your organization"

Anyone have an idea of cost?

17
crb002 2 days ago 0 replies      
Google skims top talent for themselves and feeds the rest out?
18
dividezero 1 day ago 0 replies      
What is Google doing in this space? Seems like monopolization at its finest
19
amelius 2 days ago 0 replies      
Another service where "you" are the product.
20
6stringmerc 1 day ago 0 replies      
...oh, another proprietary system for seekers to jump through a la Taleo? Sounds good to me, as I'm on the other side of the fence with my concept and target market, and will take as many big, heavyweight, Job-Poster oriented targets possible when getting ready to debut or share more about the project.

I wonder if you can skip the "Background Check" as an Applicant just by letting Google run a report for them on your GMail account profile haha.

21
fauigerzigerk 1 day ago 0 replies      
So how long until Google offers screening/scoring/vetting of applicants based on the data it has about all of us?

All voluntary of course, if you can afford to decline. This isn't far fetched. Employers have asked to access Facebook accounts before, which is extreme and will never catch on (I think).

But allowing Google to rank and match applicants to particular roles may seem harmless enough to many, and then it will become very difficult for anyone to say no.

22
rdtsc 1 day ago 0 replies      
I bet in the future this will be enhanced with Google providing various stats and scores based on the social and advertising profile people have.

For example a % score with something like this: "Trustworthiness", "Political activism", "Obedience", "Addictions", "Laziness", "Morality", "Extroversion" etc.

It will be opaque and derived from their "secret sauce" by scouring GMail, DNS queries from your IP, phone calls you made on your Android phone, stuff you bought and searched for in Chrome, and so on.

Some companies and even landlords check credit scores when you apply. Criminal records. Border patrol checks social account postings. Imagine having access to all non-public stuff Google and Facebook has.

I wonder if at some point they'd know people better than people know themselves. I kinda experienced that with Netflix when I was a subscriber. It suggested movies that at first glance I wouldn't think I'd enjoy, but the algorithm had figured me out enough that if I took the suggestion, it usually was right and I ended up liking the movie. It was a pleasant and creepy surprise at the same time.

23
richardkeller 1 day ago 1 reply      
Will be interesting to see how Google Hire is used by agencies, as opposed to companies sourcing their own talent.

Obligatory plug: I'm the cofounder of RecruitDoor [1], an applicant tracking system for recruitment agencies. We've been running in South Africa for a few months now, and we'll be branching internationally next month. (Comments and critique welcome, by the way; we're currently still a startup).

[1] https://www.recruitdoor.com

24
oblio 2 days ago 0 replies      
The interesting thing about this is that it's a very lucrative market and I can see Google pushing this as part of their search results. They're going to have the same kind of "problem" they had with Google Shopping, or whatever it's called.

If they put this thing front and forward they're going to drive a ton of people out of business just by their sheer visibility, as the Internet's front page.

25
cavisne 2 days ago 2 replies      
While the article says google doesn't want to compete with job boards...

"Millions of job seekers start their search on Google every day. And with Hire, you get a career site thats optimized for Google Search."

I.e. you will get a site that will no doubt be right at the top of Google Jobs. The aggregation value of the job boards seems pretty minimal then, if most candidates start on Google anyway.

26
bamboo_7 2 days ago 2 replies      
"Google says Hire is meant to help businesses do away with manually tracking candidates. "

Right, because there's no other software that exists to track applicants: http://www.softwareadvice.com/hr/applicant-tracking-software...

27
andrewstuart 1 day ago 1 reply      
I'm building something new in the hiring space which I hope to release soon.

When you hear about some new entrant to the space that you are creating a new product for, there is the inevitable "gulp" as you urgently scan their features to see how similar it is to your new thing, especially when the product is from a giant company like Google.

Fortunately in this case Google Hire just looks like another "me too" similar to all the other products. I'm hoping what I have built is actually some fresh thinking. Now if I can just grind through the remaining tasks in that never-ending task list....

28
fara 2 days ago 0 replies      
If you are not comfortable using Google products, we are currently using Pipedrive to manage the recruiting pipeline and it has proven to be great. Haven't tried Hire but it seems to be about the same. With Pipedrive you can customize your process, set up reminders that integrate with your Google calendar, search for old candidates, use the budget field for salary, forward emails to keep the history log, add custom fields, etc. They wrote a blog post about this with some hints: https://blog.pipedrive.com/2014/06/how-to-use-pipedrive-for-...
29
joepour 1 day ago 0 replies      
I wrote a fully featured ATS last year that is currently for sale: https://flippa.com/8808403-growbeam-com

Seems like there are a lot of people in this thread who might find this relevant.

30
gorbachev 2 days ago 1 reply      
With some luck this will be the death blow to Taleo and the other legacy ATS vendors.
31
alonshiran 1 day ago 0 replies      
Smart move. They're building a strong suite of corporate products and creating more barriers to leave it.

Kind of similar to Apple's hardware products that are strongly connected.

Too bad though that google is killing another industry on the way...

32
popopobobobo 1 day ago 1 reply      
Don't you guys realize what is happening? Google can potentially screw you over in your careers now. Imagine they serve up your resume along with the type of porn you searched for in the footnote.
33
Bedon292 1 day ago 0 replies      
We were just about to switch to Lever, and then this comes out. Seems like a nice option when we are already using G Suite for everything. Really curious about their pricing and capabilities, though.
34
delegate 2 days ago 1 reply      
I wonder why I don't see candidates jumping for joy when they hear about it?

Don't you want to be conveniently tracked, compared, analyzed and disposed of with a couple of clicks? Don't you want your professional profile to be sold to the highest bidder some time in the future?

I mean, I'm sure the software brings order and ease to the hiring process, but somehow this doesn't make me feel good. It makes me nauseous ... from how insignificant and mechanical this makes us.. people treating people like data...

35
jxramos 1 day ago 0 replies      
What does their usage of the term "verticals" denote?

>>>While Hire itself is interesting in its own right, it's also interesting to see that Google is now looking to use the G Suite tools and back-end services it has developed over the last few years to solve problems in very specific verticals.

36
mintplant 2 days ago 0 replies      
> Hire offers businesses a cohesive applicant tracking service that's deeply integrated with G Suite to make it easier for businesses to communicate with their candidates and track their progress through the interview process.

This seems odd coming from Google, as they're notoriously inconsistent (and/or manipulative) with communication during the application-interview-hiring process.

37
Animats 1 day ago 0 replies      
Will this mean that you can't get a job without a Google account?
38
philip1209 1 day ago 1 reply      
I can't find a way to contact support (no surprise for a google product) - so if anybody from Google Hire is lurking, the "Webinar" button after the lead capture page leads to a 404 https://imgur.com/a/BIQlZ
39
sandGorgon 1 day ago 1 reply      
I'm in the market for an ATS. None of them offer anything for less than $100-300 per month. That's more than what I spend on other core APIs.

Is there an ATS in the ~$10 price range?

41
seanwilson 1 day ago 0 replies      
Can anyone explain the logic behind picking commonly used words as product names? I know this is Google, but isn't it hard to rank well with such generic names?
42
joshfraser 1 day ago 0 replies      
Since when did Google become a "request demo" company?
43
CryoLogic 1 day ago 0 replies      
On the plus side, while Google may have an advantage for run-of-the-mill jobs, niche sites like Stack Overflow Careers will always have a better ROI for specialized positions.
44
bhartzer 2 days ago 0 replies      
I'm surprised they didn't launch on hire.google, and chose to launch it on hire.google.com instead. Apparently Google doesn't see the .Google TLD as a priority for them.
45
spullara 1 day ago 0 replies      
It is unclear to me why many people are comparing this effort to a job board. It is pretty clearly an ATS and competes with Jobvite and not Indeed.
46
fastball 1 day ago 0 replies      
I'm guessing "Hired" doesn't have a trademark on the name?
47
learc83 1 day ago 0 replies      
Does anyone know what the applicant side of this looks like? I can't find anything about this on their site.
48
PangurBan 1 day ago 1 reply      
This opens the possibility of launching further G Suite initiatives focused on other business problems and sectors
49
Fricken 1 day ago 0 replies      
I'm surprised it took them this long. I was wondering why Google didn't get into this space a decade ago.
50
chris__butters 1 day ago 3 replies      
Let's see how many businesses are destroyed and how much money Google makes because of this.

It's a shame they can't just stand back and let people use websites like Indeed, Monster, JobSite among others and let them get on with it and just make money from the ads.

Do they not make enough already?

51
redindian75 1 day ago 0 replies      
52
balls187 1 day ago 1 reply      
We use Jobvite. Works pretty well.
53
redindian75 1 day ago 0 replies      
anyone know how they made those nifty animations - was it handcoded?
54
olivierva 2 days ago 0 replies      
Look, another product they will retire in about a year's time.
5
Google relaunches Glass for businesses x.company
481 points by tsycho  1 day ago   263 comments top 41
1
aresant 1 day ago 13 replies      
This is so !@%!@ cool.

Check out the A/B test of a technician with / without the software referenced in the article:

https://www.youtube.com/watch?v=E5gXuZp25f0

Then here's a video that gives a sense of the software's interface:

https://www.youtube.com/watch?v=z5HOHNECW20

Very workflow oriented with nice communication and lookup features.

This is the kind of small optimization stuff that is going to be revolutionary in driving macro productivity.

Amazing!

2
payne92 1 day ago 10 replies      
It's easy to armchair quarterback this stuff after the fact, but this is where Glass should have started. The price point, social stigma/issues, and use cases all screamed "business applications!".

Consumer tech may be where the glamour and scale are, but it's not always the best market entry point.

3
gfodor 1 day ago 2 replies      
I've never understood why they insist on having the device be visually asymmetric. Just put a piece of plastic on the other side that is non-functional, and the "cyborg effect" basically goes away. The human brain hates asymmetric faces. Such a stupid oversight; doing this from the get-go may have been enough to save the consumer effort.
4
kharms 1 day ago 8 replies      
>>Glass is also helping healthcare professionals. Doctors at Dignity Health have been using Glass with an application our partner Augmedix calls a remote scribe.

My primary care doctor has a human scribe. The scribe is a recent graduate (BS), planning on going to med school next year. Being physically in the room, watching the doctor work is a great benefit to her. I'm not sure she'd benefit as much from watching a live stream.

Additionally, as a patient I wouldn't be comfortable being recorded.

5
jessriedel 1 day ago 1 reply      
> The mechanics moved carefully, putting down tools and climbing up and down ladders to consult paper instructions in between steps... Fast forward to today, and GE's mechanics now use Glass running software from our partner Upskill, which shows them instructions with videos, animations and images right in their line of sight so they don't have to stop work to check their binders or computer to know what to do next.

The article makes it sound like Google glass is the first to do anything like this, and it was all paper manuals before that. In fact, aircraft manufacturers have been using smart glasses for years to augment workers.

http://www.engineering.com/AdvancedManufacturing/ArticleID/1...

Maybe Glass is a significant improvement, but it's not unprecedented.

6
ajmurmann 1 day ago 0 replies      
This feels like it's getting us one step closer to the vision of the early stage control by AI that is described in Manna: http://marshallbrain.com/manna1.htm

Obviously the AI part isn't there, but we now have a fabulous interface for having complex tasks aided/guided by AI. Combine this with what's already going on in Amazon warehouses and we are pretty close to the description of how fast food restaurants are run in the story.

7
randomf1fan 1 day ago 1 reply      
I'd be really interested in hearing from someone who uses this on the floor. Is it really all they say it is? The marketing and PR looks good, but do mechanics really love it?
8
oliwarner 1 day ago 8 replies      
Some of this stuff is so obviously cool but remember the cost of efficiency. 30% time saved means one person does more in their day. This has a personal toll because working less intensively gave you time to think and physically rest. You're at it non-stop now.

And that also means you need 30% fewer employees to manage the same workload. That's going to be the trade-off here. How many people will have to go just to offset the hardware and software costs?

I don't know what I'm arguing here... I'm finding it hard to avoid quoting Ian Malcolm in the context but I think we have to remember there are definite downsides to treating people like underutilised machinery.

9
djsumdog 1 day ago 3 replies      
This is cool and all, and I'm glad the concept didn't totally die, but will consumers ever see this type of wearable tech again? Some people shelled out over $1k when Google initially offered glass prototypes, only to be left with unmaintained devices.

I honestly though Glass would have done better if it had no recording capabilities built-in. It would have substantially reduced the creepiness factor.

It's sad that no one has come in and tried to tackle the heads-up wearable market. Sony has some glasses that looked terrible, and I guess the battery life issues are still too big for many manufacturers to overcome?

10
NamTaf 1 day ago 0 replies      
This is precisely the application I first imagined when I saw it. Being able to pull up exploded views of an assembly, having reference information for something you've got both hands inside, etc. is invaluable.

I can't count the number of times I've had to extract both my arms from inside a machine (in doing so losing track of where precisely I'm holding stuff and the bearings that provides you), wipe off all the grease, dust, grime, etc., thumb through a pile of papers that still get dirty and then mentally translate a 2D drawing to what I'm working on, only to then lose my place and have to work it all out again. Having something voice-controlled and right there in front of my eyes would be invaluable.

Industry really is the perfect environment for this. Safety issues notwithstanding (which you can work through), it's really the best application of this technology and you can quickly quantify an ROI from its implementation.

11
mafuyu 1 day ago 1 reply      
Sounds like they've found a great enterprise usecase, and I hope they can keep improving the device to bring it to consumers once again.

I was able to snag a Glass for a good price when they killed support (before selling it off again after a couple months). I enjoyed using it, and being on a college campus at the time reduced some of the social awkwardness. I could push notifications to my face with IFTTT and the voice recognition worked reasonably well. Ironically, I found the most useful feature to be the camera. It's liberating to be able to wink and get a snapshot of whatever is in your field of view, whether it's some info you want to remember or a small moment you want to share. I'm on vacation now and find myself fumbling with my phone to take snaps of interesting things I want to share way too often.

12
dfee 1 day ago 0 replies      
I was an early member of Pristine (the Google Glass company that Upskill recently acquired) where I began as a developer, and then landed our first paid deal at the end of 2014.

We used to buy the glasses for $1500 apiece and had probably 50 pairs of them lying around by early 2015.

The engineering team was great - while I was there it felt like we were flying blind wrt Google's official support. From a business perspective, I'm not sure product-market fit was really ever achieved, though after I left the company expanded its horizons beyond healthcare / telemedicine.

Good luck to Upskill :)

13
e12e 1 day ago 1 reply      
I'd be more excited if it looked like Google had addressed some of Steve Mann's critiques from the initial announcement, but as far as I can tell, the critique is still not addressed:

http://spectrum.ieee.org/geek-life/profiles/steve-mann-my-au... (For discussion on Glass design, look for the paragraph starting: "I have mixed feelings")

FWIW it appears Mann is working with a company on a different system for mediated reality:

https://www.metavision.com/

14
0xfab1 1 day ago 1 reply      
That makes me want a pair that's installed with an app that shows relevant stackoverflow answers as I code.
15
tlb 1 day ago 0 replies      
It seems disingenuous to pitch it as an improvement over a ring binder of documents. The question is whether it's an improvement over an iPad on a stand beside your work.
16
morley 1 day ago 3 replies      
Has anyone here worked on a Glass app? How is the UI programmed? Does it require a special programming paradigm like VR, or are these instruction manual HUDs basically just PDF viewers?
17
Zhenya 1 day ago 3 replies      
No info on the HW, and that's the interesting part. How did they solve the battery life issues, etc.?
18
bhnmmhmd 22 hours ago 0 replies      
It's probably not related to this article per se, but isn't it weird that "x.com" is owned by Elon Musk while the actual company - which is a branch of Alphabet - has to use "x.company"?
19
tqi 1 day ago 1 reply      
For the manufacturing example, why is this better than having a simple 7-10 inch tablet? For the doctor example, why is this better than a simple body cam, or even a camera that is wall or desk mounted?
20
amelius 1 day ago 1 reply      
A few questions:

- Can only businesses buy Glass?

- What is the pricing model? Is this sold as a product, or is it paid for as a subscription service?

- Will there be a "play store" equivalent for software for Glass?

21
phyller 1 day ago 1 reply      
Nice. This is what the original launch should have been like. Help doctors use this to provide better care and no one is going to be calling anyone a "glasshole"
22
tastyfreeze 1 day ago 0 replies      
I have always thought that the perfect application for wearable head mounted displays would be mechanical repair. I would love to be able to pull up the maintenance manual for my vehicle and have the glasses overlay part names, fastener sizes, and torque settings for whatever part I was looking at. This is a step in that direction.
23
irrational 1 day ago 0 replies      
I'd love to have something like this to show me step by step how to do repairs on my car or an appliance. Recently I was fixing the turn signal lever in my car by referring to youtube videos. It would be awesome if I could just download to the device a file for whatever thing I need to fix and it just walks me through the steps.
24
beebmam 1 day ago 0 replies      
I actually am really sad Glass didn't take off. When it was available, I was unemployed. I'd love to get my hands on a new version of it, if google were ever to release a new version for the public.
25
theptip 1 day ago 3 replies      
This seems to me the obvious use-case for AR technology, but Hololens looks further along than Glass; I'd be interested to see where they have got with any comparable Hololens projects.
26
LyalinDotCom 1 day ago 0 replies      
Kudos to Google for continuing to invest into this product, it really has long-term potential and we need all the big players in this space to drive competition forward.
27
BatFastard 1 day ago 0 replies      
SAD, it's black and white! I recall Mondo 2000 challenging every color scheme ever created. Lime greens and oranges!
28
omot 1 day ago 3 replies      
I don't know what it is about it, but the design is so cringey. Even on a professional, it just looks... bad.
29
projectramo 1 day ago 4 replies      
I know Glass, or something like it, will be great. But I don't know which combination of features + killer app will unlock the thing.

HN has probably mentioned this before but is there a reason Google makes announcements on Medium and not on Blogger?

I would do the same thing if I had a choice, but Google could just make the formatting on Blogger better.

30
stutterSpeaker 1 day ago 0 replies      
I hated it when they abandoned it, because I was so excited about the tech for so long. I think maybe the public just wasn't ready for it then. They probably still aren't now, but this could be an excellent way for them to get more comfortable with it.
31
dragonwriter 1 day ago 0 replies      
Headline is neither the source headline nor technically accurate; while X started life as Google X before the Alphabet reorg, it's a separate subsidiary of Alphabet from Google.

This story is about X, not Google.

32
singularity2001 1 day ago 1 reply      
x.company is an interesting domain name! One-letter domain, seven-letter TLD.
33
zxcvvcxz 1 day ago 0 replies      
Super excited, that top image is literally me in the garage sometimes, albeit with safety goggles.

Can these provide eye protection from bits of flying metal while one is drilling?

Will keep digging through the page.

34
ensiferum 1 day ago 0 replies      
I'm sure standard software TOS apply. I.e., the software vendor is not responsible for any errors, omissions or mistakes it presents to the mechanic.
35
mshiran 1 day ago 0 replies      
very cool!
36
pessimizer 1 day ago 0 replies      
This was completely predictable, and what everyone suggested as Glass failed for consumers; the fact that they're doing this now doesn't necessarily represent any success or particular efficaciousness discovered during the pilot programs. The fact that it took this long to roll out and announce publicly is a bad sign, though. They may have just run out of time, and were forced from above to make their best try at it.
37
matunixe 1 day ago 1 reply      
38
sharemywin 1 day ago 0 replies      
Aw...how cute they look like little borg lite.
39
megamindbrian 1 day ago 0 replies      
This is such bullshit. I applied for their private beta and never got a response.
40
j_hall_in 1 day ago 0 replies      
Looks like they are using the strategy of Microsoft HoloLens here which I think makes sense. There isn't enough wide-spread value add in these augmented reality headsets for general consumer use yet, but businesses will help drive innovation until that time comes.
41
MBCook 1 day ago 3 replies      
A lot of fluff here, but not much substance. I see how having large manuals or paper lists in your field of view could be very useful.

Does it work well for employees with glasses?

I assume they've updated the chip inside to something less power-hungry. Does it get reasonable battery life now?

Why do doctors need Glass to record notes in the background? Couldn't any computer run that software?

6
Apple Machine Learning Journal apple.com
460 points by uptown  1 day ago   116 comments top 18
1
exhilaration 1 day ago 7 replies      
For anyone curious about why Apple is allowing its researchers to (anonymously) publish papers like these on an Apple blog, it's because of this:

Apple's director of AI research Russ Salakhutdinov has announced at a conference that the company's machine-learning researchers will be free to publish their findings. This is an apparent reversal of Apple's previous position.

Refusing permission to publish was said to be keeping the company out of the loop and meaning that the best people in the field didn't want to work for Apple.

From: https://9to5mac.com/2016/12/06/apple-ai-researchers-can-publ...

We will see whether this move is sufficient to attract the top talent they're looking for.

2
skywhopper 19 hours ago 2 replies      
Is anyone else amused by the irony of using machine-learning-trained image generator in order to provide data to a machine-learning-trained image recognition program? I'm sure the researchers themselves and plenty of people here could come up with all sorts of logical reasons why this is fine, and very possibly given the right protocols it would be fine. But this sort of approach seems to lend itself toward increasing the risks of machine-learning. ie, you're doubling down on poor assumptions that are built-in to your training criteria or which creep into the neural net implicitly, because you are using the same potentially flawed assumptions on both ends of the process. Even if that's not the case, by using less real, accurately annotated data, you're far less likely to address true edge cases, and far more likely to overestimate the validity of the judgments of the final product compared to one with less synthetic training. And if there's one thing the machine learning community doesn't need any more of, it's overconfidence.

Edit: oops, turns out I mistakenly responded to the content of the paper instead of the fact that it exists and the form of its existence. Sorry.

3
bschwindHN 4 hours ago 0 replies      
Going off on a bit of a tangent, but I feel like Apple's niche in AI will be with on-device processing. The iPhone 7 already has an FPGA onboard, and I would guess the next iPhone will have more/more powerful chips. Training would probably still have to happen on their servers though due to the dataset sizes needed. I might just be full of shit though, I'm not much of an AI developer.
4
Stasis5001 23 hours ago 1 reply      
A lot of academic papers actually aren't all that great, for a variety of reasons. Normally, you can use citations, journal, and author credentials to get a sense of whether a paper is even worth skimming. The only "paper" on the "journal" right now looks like it's just a watered-down, html-only version of https://arxiv.org/abs/1612.07828!

Seems like more of a PR stunt than anything useful, but who knows.

5
zo7 22 hours ago 0 replies      
It's interesting how much criticism they're getting because Apple formatted their blog to be anonymous and watered down, but they're clear in their first technical post that it is just an overview of work that the researchers are presenting at CVPR [1].

So the researchers at Apple are still getting credit for their work in the scientific community, but the PR-facing side of their work is anonymous, probably for some aesthetic reason (this is Apple, of course)

[1]: https://arxiv.org/abs/1612.07828

6
ericzawo 1 day ago 4 replies      
The most Apple thing ever is that they called it a "journal" and not a "blog."
7
KKKKkkkk1 1 day ago 4 replies      
Why no author names on the article?
8
tedmiston 1 day ago 2 replies      
The top comment on Product Hunt from Ryan Hoover raised a good point about Apple's timing with this:

> This launch is particularly interesting because this isn't typical for Apple, a fairly secretive and top down company (when it comes to external communications). Timing makes a lot of sense with their upcoming launch of ARkit, Apple Home, and the inevitable "Siri 2.0", among other things.

https://www.producthunt.com/posts/apple-machine-learning-jou...

9
pseudometa 23 hours ago 0 replies      
I would really like to see the names of the people who are working on the research. They reference other papers and give their authors credit, but I was disappointed to not see the Apple employees get credit.
10
Angostura 21 hours ago 1 reply      
I'll be keeping an eye out for acrostics with the author's name.
11
mattl 1 day ago 5 replies      
What's powering this site? Doesn't look like WebObjects.
12
acdha 23 hours ago 3 replies      
I really wish this had an Atom or RSS feed
13
gjvc 7 hours ago 0 replies      
I'm betting that sjobs would not have approved this
14
plg 20 hours ago 0 replies      
I like the font. Is it possible/legal to use the SF Pro Text webfont?

PS I know the desktop font is available for download at the apple developer site ... but I'm talking about the web font

15
0xCMP 1 day ago 2 replies      
So we're def getting some form of facial recognition in the new iPhone with stuff like this being published.

Feels like an early post to show they've done some advanced work in making sure you can't trick them.

16
mrkrabo 23 hours ago 0 replies      
No <title>?
17
dekhn 23 hours ago 0 replies      
calling this a "journal" and making it anonymous is disingenuous.
18
joshdance 1 day ago 0 replies      
Surprising. Hopefully we see more of this.
7
Show HN: A Set of Dice That Follows the Gambler's Fallacy github.com
494 points by xori  23 hours ago   222 comments top 46
1
andy_wrote 21 hours ago 4 replies      
There's a probability model called the Pólya urn where you imagine an urn containing numbered balls (colored balls in a typical example, but to draw the comparison with dice we can say they're numbered 1-6), and every time you draw a ball of a certain color, you put back more balls according to some rule. A few probability distributions can be expressed in terms of a Pólya urn, see https://en.wikipedia.org/wiki/P%C3%B3lya_urn_model.

A fair 6-sided die would be an equal number of balls numbered 1-6 and a rule that you simply return the ball you drew. You can get a gambler's fallacy distribution by, say, adding one of every ball that you didn't draw. I read the code as a Pólya urn starting with 1 ball per face 1-N and doing that on each draw, plus reducing the number of balls of the drawn number to 1.
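
In code, that reading of the rule is only a few lines (a sketch of the interpretation above, not the package's actual source):

  // urn with one ball per face; the drawn face resets to 1 ball,
  // every other face gains a ball, so "overdue" faces get likelier
  class UrnDie {
    constructor(sides) {
      this.balls = new Array(sides).fill(1)
    }
    roll() {
      const total = this.balls.reduce((a, b) => a + b, 0)
      let r = Math.random() * total
      let face = 0
      while (r >= this.balls[face]) {   // weighted draw
        r -= this.balls[face]
        face++
      }
      this.balls = this.balls.map((n, i) => (i === face ? 1 : n + 1))
      return face + 1
    }
  }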

Also related, in 2d space, is the idea of randomly covering the plane in points but getting a spread-out distribution, since uniformity will result in clusters. (If you're moving a small window in any direction and you haven't seen a point in a while, you're "due" to see another one, and vice versa if you just saw a point.) Mike Bostock did a very nice visualization of that here: https://bost.ocks.org/mike/algorithms/

2
ideonexus 22 hours ago 18 replies      
A great application for this is in randomizing playlists. My friends, who are also CS grads and should know better, have often complained that their MP3 players, CD carousels, etc. play the same music too often, claiming that the randomness is broken, when a song repeating in a short period of time or other songs never playing is exactly what you would expect from a truly random selection. Using this algorithm, you'd be sure to hear all of your songs. I'm guessing most music services already do something like this.
3
Pfhreak 22 hours ago 6 replies      
Interesting, and at first I was excited about the possibilities in something like D&D, where a series of bad rolls can have you feeling down. "I'm due for a critical hit any swing now..."

Players would love that! Make my hero feel more heroic! The inevitable comeback!

But then I thought about the inverse case -- you are doing really well, and now you are due for a failure. Or series of failures. That would feel awful.

We have a lot of emotions around dice rolling. I wonder what players really want from their dice. Would players prefer dice that are secretly unevenly weighted towards good rolls? Would they still want those dice if they knew they were weighted?

4
doodpants 20 hours ago 2 replies      
> I made a chatbot that rolled dice, and it was constantly criticized for being "broken" because four 3's would come up in a row.

> These accusations would come up even though they (all being computer science majors) know it's possible (although unlikely) for these events to happen. They just don't trust the black box.

I am reminded of the approach that GamesByEmail.com used to address this criticism:http://www.gamesbyemail.com/News/DiceOMatic

5
nickm12 21 hours ago 1 reply      
I had the privilege of studying probability from G-C. Rota. One of my favorite quotes from him was "Randomness is not what we expect", which he used to describe the phenomenon of people disbelieving that random data was actually random. Another great was "This will become intuitive to you, once you adjust your intuition to the facts."
6
cableshaft 22 hours ago 1 reply      
There was an old flash video game I worked on a long time ago where I did exactly this. I had a boss with two main attacks, and I didn't want it to be super predictable A/B/A/B, so I had it pick between A and B randomly, then reweight the probabilities, so if it picked A, instead of 50% A, 50% B it'd now be 25% A, 75% B. If it picked A again it'd be down to like 12.5% A, 87.5% B. If B then got chosen, it'd flip flop to 75% A, 25% B, etc. The result was it mostly went back and forth between the two, but would do some attacks 2 or 3 times in a row before switching back to the other.

You can actually play it right here and go direct to the Boss Fight if you wanted to: http://briancable.com/clock-legends/
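
Those numbers are consistent with the repeat chance being 0.5^(streak+1); here's one way the reweighting might be reconstructed (a sketch, not the game's original code):

  let last = null   // last attack picked
  let streak = 0    // consecutive picks of `last`

  function nextAttack() {
    let pick
    if (last === null) {
      pick = Math.random() < 0.5 ? 'A' : 'B'      // first pick: fair coin
    } else {
      const pRepeat = Math.pow(0.5, streak + 1)   // 25%, 12.5%, ...
      pick = Math.random() < pRepeat ? last : (last === 'A' ? 'B' : 'A')
    }
    streak = (pick === last) ? streak + 1 : 1
    last = pick
    return pick
  }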

7
vanderZwan 19 hours ago 0 replies      
A similar, simpler idea is sometimes used in games: you put all choices in a "bag", then draw from the bag until it's empty, then put everything back.

Tetris is the go-to example. Tetris has seven tetrominoes, and in most modern implementations you're guaranteed to see them in sets of seven in random order.

http://tetris.wikia.com/wiki/Random_Generator
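
The whole trick fits in a few lines; a sketch of a generic grab bag with a Fisher-Yates shuffle (the seven piece names are the standard tetrominoes):

  function makeBag(items) {
    let bag = []
    return function draw() {
      if (bag.length === 0) {
        bag = items.slice()
        // Fisher-Yates shuffle the refilled bag
        for (let i = bag.length - 1; i > 0; i--) {
          const j = Math.floor(Math.random() * (i + 1))
          const tmp = bag[i]; bag[i] = bag[j]; bag[j] = tmp
        }
      }
      return bag.pop()   // within a bag, no piece repeats
    }
  }

  const nextPiece = makeBag(['I', 'O', 'T', 'S', 'Z', 'J', 'L'])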

This is pretty essential to make competitive play err on the side of skill rather than randomness: pro-players can anticipate and plan for this. For fun, here's a recent Tetris head-to-head speed-run from Awesome Games Done Quick, with pretty good narration about the tactics involved:

https://www.youtube.com/watch?v=PeNB4w99FiY&t=1h21m15s

8
bigato 22 hours ago 5 replies      
Now I really want physical loaded dice which follow the gambler's fallacy! Is it too crazy of an idea?
9
hesdeadjim 22 hours ago 2 replies      
This reminds me of Sid Meier's talk at GDC about having to game the random number system because of player's expectations:

http://www.gdcvault.com/play/1012186/The-Psychology-of-Game-...

More often than not, true RNG in game design takes a back seat to fun.

10
Thespian2 20 hours ago 1 reply      
For board games, like "Settlers of Catan" where resources are generated based on rolls of 2d6, one could use an analog version of this with a cup containing 36 chits, of the numbers 2-12 according to the normal distribution, _without_ replacement. You would still get the randomness of ordering, but over 36 draws/turns would get a complete "average" distribution.

Whether that is a bug or a feature is left as an exercise for the reader.

11
kator 22 hours ago 2 replies      
I often use google "flip a coin"[1] for stupid things and the other day I was wondering why almost every single time it came up heads. I started to wonder if there was a browser rng problem or the code was crazy etc.

[1] https://www.google.com/search?q=google+flip+a+coin

12
mysterydip 3 hours ago 0 replies      
This will be great for game developers as many a player has complained that the RNG wasn't "fair" because they got so many fails in a row or never saw a critical hit or whatever, even though it was mathematically correctly random. Thanks, looking forward to using it!
13
closed 21 hours ago 1 reply      
At first I was confused, because statistical models that aren't temporally independent are very common.

But it's very clear from the comments that having dice that aren't independent between rolls is incredibly in demand :o, and having the right words to google can be tricky.

(I feel like there's an important lesson there)

14
onetwotree 13 hours ago 0 replies      
Video games, especially competitive ones, do this to limit the effect of randomness on the outcome of the game, while still keeping the sequence of random events unpredictable enough to "feel" random and preventing simple exploits.

DoTA2 uses a simple distribution based on the number of "rolls" since the last successful one - P(N) = P0 * N, where P0 is the base probability and N is the number of rolls since the last successful one[1].

It keeps both "hot" and "cold" streaks from being too much of an issue, although that doesn't stop players from cursing the RNG gods when they lose.

[1] http://dota2.gamepedia.com/Random_distribution
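
In code, the rule quoted above is tiny (a sketch of the linear form; note that the base constant Dota actually uses for each listed percentage differs from the nominal chance):

  // pseudo-random distribution: every miss raises the next chance,
  // so long dry streaks become impossible past n = 1/c attempts
  function makePrd(c) {
    let n = 1   // attempts since the last success
    return function proc() {
      if (Math.random() < c * n) {
        n = 1           // success: chance drops back to the base
        return true
      }
      n++               // miss: chance next time is c * n, a bit higher
      return false
    }
  }

  const crit = makePrd(0.05)   // base constant, not the effective rate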

15
rivo 8 hours ago 0 replies      
The quote at the end is meant as a joke but it's interesting how often this is true. A lot of magic tricks rely on being prepared for different outcomes, while often trying for the least likely one first. This unlikely outcome happens surprisingly often and therefore makes the effect even more unbelievably amazing.

I had a friend think of a playing card and any number (1-52). She picked the 6 of spades and the number 15 which is exactly the position where the card was located. It was only the third time I had done this trick with anybody.

Obviously, card and number picking is not uniformly random, especially when you influence their choice (e.g. "pick a large number"). But the odds of someone guessing the exact combination should still be extremely low.

A lot of what you see from David Blaine on TV is exactly this. He always has a backup plan but more often than not he doesn't need it.

16
biafra 6 hours ago 0 replies      
How do I run this code? I think I successfully installed the package with npm. There were some warnings but no errors. But how do I run:

> const RiggedDie = require('gamblers-dice')

> const die = new RiggedDie(20) // for a d20

> console.log(die.roll()) // 1 -> 20

> console.log(die.roll()) // keep using the same instance

Do I put it in a file? Do I copy-paste it into a REPL? If so, what provides that REPL?

I am always surprised when sample code providers assume I know their language's ecosystem.

UPDATE: Apparently I can use "node" as a REPL for this.
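
For anyone else tripping over the same step: the README snippet is plain Node code, so saving it to a file and running it with node works too (this only uses the RiggedDie/roll() API quoted above):

  // roll.js -- run with: node roll.js
  const RiggedDie = require('gamblers-dice')

  const die = new RiggedDie(20)    // a d20 that remembers past rolls
  for (let i = 0; i < 10; i++) {
    console.log(die.roll())        // keep using the same instance
  }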

17
eriknstr 19 hours ago 1 reply      
> I made a chatbot that rolled dice, and it was constantly criticized for being "broken" because four 3's would come up in a row.

> These accusations would come up even though they (all being computer science majors) know it's possible (although unlikely) for these events to happen. They just don't trust the black box.

This reminds me of a talk [1] given at Game Developer's Conference (GDC) about the game Civilization, in which Sid Meier, creator of said game, spent a bit of the time talking about the difference between fairness and perceived fairness. The talk is only an hour long and worth watching.

[1]: https://www.youtube.com/watch?v=AJ-auWfJTts

18
root_axis 21 hours ago 1 reply      
This reminds me of an article discussing the perceived outcome of RNG decisions in video games. In many types of games, the system will display a percentage chance of success for a given action, which allows the player to make risk assessments regarding possible choices. Unfortunately, the unmodified RNG behavior creates an unpleasant experience for the user because the unweighted random outcomes feel "unfair" when failure streaks pop up; thus, game designers almost always introduce some type of magic cushioning to the RNG so that the user never faces too many repeated failures.
19
18nleung 11 hours ago 1 reply      
How exactly does the roll() method work? Can't seem to parse the meaning of `runningSum` and `mark`.

  roll() {
    // each face i holds this.state[i] units of weight; sum is the total
    const sum = this.state.reduce((p, c) => p + c, 0)
    // r picks a point in the total weight, uniformly
    const r = Math.random() * sum
    let runningSum = 0
    let result = -1
    for (let i = 0; i < this.state.length; i++) {
      const mark = this.state[i]   // this face's current weight
      runningSum += mark           // cumulative weight up to face i
      if (r < runningSum && result === -1) {
        result = i                 // r fell inside face i's slice
        this.state[i] = 1          // rolled face: weight resets to 1
      } else {
        this.state[i]++            // unrolled faces grow more likely
      }
    }
    // Add 1, so the die roll is between 1 -> size of die
    return (result + 1)
  }

20
smallnamespace 13 hours ago 0 replies      
This shows up a lot (predictably) in actual games, e.g. Hearthstone sells you digital cards, and the randomization specifically guarantees that the time between rare cards is capped [1].

Having unusually bad luck (e.g. opening 100 packs and not getting a single legendary card, when the average would be every ~20 packs) feels bad and probably loses Blizzard a customer, so the solution is to cut off the downside tail of the distribution.

[1] https://www.reddit.com/r/hearthstone/comments/3z7jyh/pity_ti...

21
_Marak_ 22 hours ago 5 replies      
I'm probably very wrong, but I still feel there is some undiscovered science when it comes to RNG and the fallacy of the maturity of chances (the Gambler's Fallacy).

Einstein believed the universe was deterministic. Just because it appears to us that there is no correlation between independent events (the roll of a die) does not mean that there isn't some underlying variable that we are unable to measure or perceive which is affecting the outcome of the rolls.

22
YCode 20 hours ago 1 reply      
I wonder if this could be / has been applied to loot tables in video games in order to keep the player interested in playing.

I've designed a few loot tables and the Gambler's Fallacy is a criticism I often have to deal with when people don't understand why a given item hasn't dropped despite them having killed a mob enough times to statistically warrant it.

23
methodin 19 hours ago 1 reply      
It's always nagged me that statistical problems are scoped so small. Surely in saying there are 6 outcomes on a die you've obfuscated the billions of interactions between atoms and input possibilities in doing so. Thrower A and thrower B will undoubtedly throw slightly differently, which might actually constrain the outcomes and skew the 1 in 6 percentages?

It's similar to me to condensing 30 characters to 5 via an algorithm. You can go one direction but not the other and if your model was centered around the resulting 5 it doesn't really reflect what's actually happening which may skew the probabilities quite a bit. e.g. if the algorithm was "if first letter is not q, then first letter in output is q". If you were saying each has an equal percentage of occurring it'd be flat out wrong.

* I am not a statistician and have no idea what I'm talking about

24
erikb 21 hours ago 0 replies      
And then there's also the gambler's wisdom: if the die has come up 6 too many times in a row, look for another game.
25
colanderman 21 hours ago 0 replies      
A simpler (albeit quite deterministic) way of accomplishing this is to use an LFSR [1] or the CRC [2] of an incrementing counter. Such a sequence of values "looks random" under many measures but also has the property that you will eventually get an even distribution of values (after the period of the counter).

[1] https://en.wikipedia.org/wiki/Linear-feedback_shift_register

[2] https://en.wikipedia.org/wiki/Cyclic_redundancy_check
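
For reference, the classic 16-bit Fibonacci LFSR (taps 16, 14, 13, 11, as in the C example on the Wikipedia page in [1]) translated to JavaScript; it visits every nonzero 16-bit state exactly once per period, and the final modulo is just a quick, slightly biased way to get a die face:

  let lfsr = 0xACE1   // any nonzero 16-bit seed
  function nextLfsr() {
    // XOR of the tap bits becomes the new high bit
    const bit = (lfsr ^ (lfsr >> 2) ^ (lfsr >> 3) ^ (lfsr >> 5)) & 1
    lfsr = (lfsr >> 1) | (bit << 15)
    return lfsr
  }

  const roll = 1 + (nextLfsr() % 6)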

26
nerdponx 22 hours ago 3 replies      
This "unfair RNG" issue was big in Dota 2 (a popular video game) for a while. They ultimately implemented something similar and AFAIK now all "random" effects use it.
27
biesnecker 19 hours ago 0 replies      
In 2010 I asked a similar question on StackOverflow for choosing the option that would have the correct answer in multiple choice tests: https://stackoverflow.com/questions/3099153/better-random-fe...
28
careersuicide 22 hours ago 0 replies      
Just wanna say: I love this.
29
Tade0 21 hours ago 0 replies      
Kudos for the "I don't get it" section.
30
dmartinez 21 hours ago 0 replies      
I like to think of this as "human random". It's easier to get along with people using this type of algorithm.
31
dllthomas 20 hours ago 0 replies      
Sure, the Gambler's Fallacy has worked out poorly in the past... but doesn't that mean it's due?
32
jonjonjonjon22 21 hours ago 0 replies      
I'm not a programmer but I've thought about this a lot. It'd be interesting to know if my simple solution here has something wrong with it.

My idea is based on time - if we assign each song to a 1/1000th of a second, we play the song that matches the 1000th of a second when the next song is called.

In this case, I'm referring to the 1/1000th of a second of the current time of day. The song that gets played depends on the song's position within the second at which I change tracks.

A bit more randomness (if this is needed) could come if we use Pi - for example, we can run through a series in Pi which adds to the ID of the song. Differing track lengths then do the job of ensuring that we always wind up on a different song in the loop.

The above seems to my layman's eye to be a simpler solution, at least.

33
gpawl 16 hours ago 0 replies      
The NBA Draft Lottery is similar in spirit:

https://en.wikipedia.org/wiki/NBA_draft_lottery

34
dcookie 20 hours ago 0 replies      
This reminds me of the first scene in Rosencrantz & Guildenstern Are Dead. https://www.youtube.com/watch?v=RjOqaD5tWB0
35
zem 18 hours ago 0 replies      
Reminds me of a very interesting demonstration from Martin Gardner. Draw a 6x6 grid, and write a random digit in each cell, proceeding row by row. Now count the number of pairs of consecutive (x, x) going horizontally versus vertically; you will almost always get doubled numbers in the columns, because that's how random numbers work, but almost never in the rows, because when people are trying to generate "random" numbers by hand they avoid runs or other patterns.
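
The expected count is easy to check: each direction has 30 adjacent pairs, each matching with probability 1/10, so a truly random grid averages about three doubles per direction. A quick simulation:

  // count equal horizontal neighbors in random 6x6 digit grids
  let pairs = 0
  const trials = 100000
  for (let t = 0; t < trials; t++) {
    const g = Array.from({ length: 36 }, () => Math.floor(Math.random() * 10))
    for (let r = 0; r < 6; r++)
      for (let c = 0; c < 5; c++)
        if (g[r * 6 + c] === g[r * 6 + c + 1]) pairs++
  }
  console.log(pairs / trials)   // ~3.0: doubles are the norm, not a fluke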
36
IncRnd 21 hours ago 0 replies      
Bingo cards (the real ones, not the theoretical ones) often have human interaction as one of the steps in their creation. This is so they appear random, distinct, without patterns, etc.
37
exabrial 21 hours ago 0 replies      
A good application for this would be preventing "starvation" when doing random rewards in a video game. For instance, if a special item is to be dropped after defeating a boss...
38
snake_plissken 18 hours ago 0 replies      
Random question I've always had issues resolving: if the Gambler's Fallacy is real, how can you detect loaded dice?
39
mwexler 18 hours ago 0 replies      
Finally, Spotify's shuffle algorithm can be fixed! Thank goodness you created this.
40
sebnap 21 hours ago 1 reply      
I think they didn't trust your programming skills as much as the possibility of absurd sequences :D
41
overcast 22 hours ago 0 replies      
"A terrible idea is born." :D I can only imagine what will be made on top of this.
42
andrepd 21 hours ago 1 reply      
Eh, I thought this was something more mathematical, like non-transitive dice (https://en.wikipedia.org/wiki/Nontransitive_dice). Apparently it's... a weighted random number generator? in node.js?
43
mighty_bander 19 hours ago 0 replies      
Pshaw. Make real dice that do that and I'll get excited.
44
ProgrammerMatt 20 hours ago 2 replies      
Can someone explain to me why there is such a large discrepancy between the different sides for Math.random()? Seems fairly large.
45
Sawamara 20 hours ago 0 replies      
This is actually useful for many rpg games.
46
vacri 13 hours ago 0 replies      
On a related note, a colleague once worked at a place where they did this for on-call: Everyone has an on-call score. Every week, the person on-call had their score set to zero, and everyone else incremented by one. You could plan out the next couple of months this way, and it provided an elegant way for new hires to take their place - they start at zero, and were generally familiar enough by the time their number came around.

There were some housekeeping rules to work around the organicness of human life - if someone went on holiday they kept their score, for example - but overall it seemed to work.
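
A sketch of the bookkeeping as described, assuming the person with the highest score (longest since last on-call) takes the next shift, which is what makes the schedule plannable months out; the names are made up:

  const scores = { alice: 3, bob: 1, carol: 0 }   // hypothetical team

  function nextOnCall() {
    const names = Object.keys(scores)
    const next = names.reduce((a, b) => (scores[a] >= scores[b] ? a : b))
    names.forEach(name => { scores[name] += 1 })  // everyone else moves up
    scores[next] = 0                              // this week's person resets
    return next
  }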

8
Master Card, Cisco, and Scotiabank Join the Enterprise Ethereum Alliance entethalliance.org
441 points by 52-6F-62  1 day ago   162 comments top 21
1
dpc_pw 1 day ago 4 replies      
How pointless.

The whole point of Bitcoin was to have an uncensorable, decentralized asset that can be used to exchange value without any trust etc.

That is the only reason we endure this utterly shitty and inefficient blockchain thing, and we exchange money for these otherwise pointless online points.

I have no problem with ETH as cryptocurrency / smart contract platform. But this whole Enterprise Ethereum Alliance is just one big BS. Etherum community is trying to pump ETH value by associating with brand names, and corporations are trying to pump their stock value by presenting themselves as innovative. BS - empty words and marketing gimmicks. Just read through that page.

Just watch Blockchain vs. Bullshit: https://www.youtube.com/watch?v=SMEOKDVXlUo

2
abrkn 1 day ago 4 replies      
At this time, there are 78 comments to this post, none of them addressing technical concerns. Quite unusual for HN.

I've devoted my life to cryptocurrency since 2011 and still question whether or not this system even makes sense. It seems too expensive with the technology we have today. Once privacy, such as zk-SNARK, is added, it becomes unreasonable.

Perhaps this is what a bubble looks like. I wasn't there for the dotcom boom. Loss of critical thinking.

3
Asdfbla 1 day ago 5 replies      
Always surprises me a bit because it seems to me like those large corporate players don't necessarily need all the features offered by Ethereum (or Bitcoin), especially proof-of-work, since there's enough non-adversarial cooperation between them that a simple distributed ledger with traditional (efficient) consensus mechanisms would suffice for most of their applications. They wouldn't have to deal with the drawbacks of Ethereum's very strict threat model.

I'm curious what applications they have in mind, or if they maybe just participate to get in on the hype and explore their options.

4
Veelox 1 day ago 3 replies      
It seems like Ethereum is getting a lot more industry support than Bitcoin ever did.

If Ethereum continues like this, it looks like it could kill off Bitcoin; the looming possibility of a hard fork might be contributing to making that happen really soon.

5
Animats 1 day ago 1 reply      
Where are the press releases from Master Card, Cisco, and Scotiabank about this? Press releases of "big company partners with little company", coming from the little company, are always suspicious. It may just mean "they signed up for our mailing list".
6
Torai 1 day ago 5 replies      
> The EEA describes itself as a standards group designed to help enterprises build their own interoperable technology, mostly using private versions of the ethereum blockchain.

Ether's price is soaring right now, but isn't Master Card's business in direct competition with the public Ethereum blockchain as a payment system?

7
otto_ortega 1 day ago 1 reply      
Based on this, it seems to me that financial corporations didn't want to take the risk of decentralized crypto-currencies making them obsolete, so they decided to create their own crypto-currency, one they can control...
8
52-6F-62 1 day ago 3 replies      
I heard the head of R&D (of Scotiabank) talk at a conference a little over a month ago, and at the time he made it seem like Scotiabank had experimented with existing cryptocurrencies and blockchain tech, was dissatisfied, and thought it would be some time before the technology was approachable for their bank. I guess they were just playing coy.
9
CalChris 1 day ago 1 reply      
I suppose the Enterprise Ethereum Alliance should be contrasted with IBM and friends' Hyperledger. These are a couple of well-backed consortia which from 30,000 feet are similar.

https://www.hyperledger.org/

10
Abishek_Muthian 1 day ago 0 replies      
Title didn't mention the significant partner showcased on the website: the 'Andhra Pradesh Government'. The first state govt (India) outside the US to join the alliance. AP was recently divided, with a separate state (Telangana) split off, and lost its capital (Hyderabad - home to top IT MNCs) to it. They are actively investing in Fintech, including a new institute for blockchain - http://www.apeita.in/blockchain/
11
evanvanness 1 day ago 0 replies      
If you'd like an easy way to keep up to date on Ethereum: http://www.weekinethereum.com
12
spocklivelong 1 day ago 1 reply      
I'm really surprised at seeing Govt. of Andhra Pradesh here. Curious how they got here in the first place.
13
lpasselin 1 day ago 8 replies      
What are the advantages Ethereum has over Bitcoin, except faster transactions?
14
45h34jh53k4j 1 day ago 3 replies      
If you are into the EEA, you should check out its cousin, the Monero Enterprise Alliance: https://mea.business/
15
whazor 1 day ago 0 replies      
Ethereum itself can be used as a financial language, which can be quite revolutionary for the banking industry. Instead of using proof-of-work consensus, banks could have their own consensus method, which would allow for the order of scale that the real world needs.
16
33W 1 day ago 1 reply      
I see the Mastercard logo, but it's not included in the list.
17
cmurf 1 day ago 1 reply      
Andhra Pradesh, one of the four who have joined this alliance in the announcement, is a state of India with about 50 million people, more than the state of California. Visakhapatnam is its largest city.
18
ComodoHacker 1 day ago 0 replies      
No mention of Master Card. Mods please edit the title.
19
wideem 1 day ago 5 replies      
Ethereum as a currency will never go mainstream with price swings like this.
20
iosDrone 1 day ago 0 replies      
To the moon!
21
jdlyga 1 day ago 2 replies      
It's too bad that Ethereum prices are down so much right now from a few weeks ago.
9
Why Should I Start a Startup? ycombinator.com
487 points by craigcannon  21 hours ago   244 comments top 42
1
Mz 18 hours ago 9 replies      
He sounds twice exceptional -- gifted, but with some sort of learning disability or handicap.

Twice exceptional people often appear to be "average" but find an "average" or normal life enormously frustrating. Because of having some sort of handicap, it can take a lot to get them going. Once they get going, they often outperform others. Life is vastly better for them when they can create their own niche because they simply do not fit in to normal societal expectations very well, try though they might.

No insult intended. I fit that profile, as do both my sons.

So, perhaps a good summary is if you are bright, but having trouble finding your niche, then starting your own company is a means to create your own niche. This is exactly what I have always told my oldest son. From an early age, it was clear to me he would not make a good employee. But that doesn't mean he won't eventually be successful. He just needs to grow his own.

2
eriktrautman 35 minutes ago 0 replies      
A lot of replies are focusing on the outcomes implied by the OP's checklist -- if you answer "yes" on all, you'll be more likely to succeed financially. As a founder, I think it's far more important to focus on the journey aspects of it. Meaning, if you check the boxes, just maybe you have the mindset that will make you happiest when you're in the midst of all that hustling and struggling in the game. That mindset is what keeps you in it... if you're reward-focused, then you're going to burn out and lose direction.

So let's reframe the discussion from "do you have what it takes to be successful in a startup?" to "do you have what it takes to love the sufferfest grind of each day along the way so the outcome is just a nice reward at the end?"

3
andkon 14 hours ago 3 replies      
I'd like to fill out the "maybe" part of his "If you answer yes three times, then maybe starting a startup is for you" bit.

I answer a very strong yes to all three of those. Always have. But a startup, despite my always having worked in them and run one of my own for three years, was resolutely not a good idea for me.

It took a lot of self-discovery to see myself not as an entrepreneur, but as an artist, and it's frustrating, because it sure took me a while to admit it. I spent all my time and money and effort trying to be successful as a startup founder, and expecting that I'd eventually feel the spark that I know can drive me. But now I look at the creative work I do on my own, even though I still have a full-time job, and I feel the same connection that Michael describes here, which I had chased for so long.

The irony is that pretty much every idea I have uses the technical skills I built as a startup founder. But hey. Time to do what I always should have.

4
nathan-wailes 19 hours ago 3 replies      
Great post! I love reading pre-company autobiographical accounts from successful founders, as I find it to be a great way to get ideas for what pre-company experiences might best prepare me to be successful myself.

Some additions I would make to Michael's list for "Why to start a start-up":

- As a founder you get a level of control over what problem you're solving that you'll rarely have as an employee.

- You also get a level of control over who you're working with that you'll rarely have as an employee.

- You also escape having your income determined by the market for labor, and instead have it determined by the market for your product.

Those three things are very, very important to me, and I suspect also to many other founders.

5
agentultra 19 hours ago 8 replies      
> 1. The vast majority of startups are not successful

This alone is why 90% of people will not choose to work at a startup. You will work long hours, for crap pay, and you'll be waiting in line if there is an exit.

The odds of there being an exit worth anything to anyone other than a founder are small enough to not even be worth considering.

If you are a founder you're gambling on your chances. There are ways to mitigate the risk but there's no sure thing.

Don't start a startup if you do not have the financial security to basically lose everything you put in.

Don't start a startup if you have family that depends on your income. You could choose to eat ramen and sleep on the floor of a college dorm room. Your kids (and CPS) might not appreciate it.

I agree the motivation is very important. I disagree that you cannot find the same motivation in a more stable organization (or can't motivate yourself). I recently finished reading Extreme Ownership [0], and I bring that with me to work. People need to be responsible for outcomes: that's not unique to startups. You can also find that motivation internally and share it with your colleagues as you go.

[0] https://www.amazon.ca/Extreme-Ownership-U-S-Navy-SEALs-ebook...

6
falcolas 19 hours ago 2 replies      
So, does "personal responsibility for [...] failure" really apply when there is a 90% failure rate? Seems like short of failing to execute, it's more a game of chance than something you can influence.

That is, it seems like it's a lot like video poker: you can cause yourself to lose, but you can't make yourself win.

I would also think that getting into an industry with such a high failure rate would be terrible for someone who takes such personal responsibility for the outcome: most of the time they're going to consider themselves a failure. In the worst case, serial failures.

7
seddona 15 hours ago 0 replies      
"there is a certain type of person who only works at their peak capacity when there is no predictable path to follow, the odds of success are low, and they have to take personal responsibility for failure (the opposite of most jobs at a large company)."

I have never really stopped to consider my own motivations for doing a startup but this sums it up. Do a startup because you have to, not because you want to.

8
nurettin 8 hours ago 0 replies      
Why create a startup?

Because you have a product and the development and marketing skills to persuade people to keep on using it. It's easy to get sidetracked with colorful terminology and complex definitions.

9
gravyboat 9 hours ago 0 replies      
I would rather just run small businesses and side projects. Still a pretty high chance of failing, but if they do fail you still learn and it doesn't impact your bottom line that badly.
10
jaypaulynice 17 hours ago 2 replies      
I like this post. I wish I really started a startup in college or right after. A regular corporate job turns you into a zombie and you become out of touch with reality especially if you're well paid and well fed. A lot of people are unprepared for the hard life.

If anything, you'll learn more about life in 6 months than you will in decades. It's not just the technical stuff. You'll learn who your real friends are. Being broke and thinking you might go homeless will make you sympathize with the homeless. Co-founders will try to screw you. If you're married, you'll soon find out if your spouse really loves you or just the regular income. Fake people pop up everywhere trying to take credit for your hard work.

I think a startup is akin to the street life without the street creds.

11
TheAlchemist 17 hours ago 0 replies      
Very interesting post! The 3 questions really resonate with me. Usually when talking startups, people talk about changing the world, having an impact, etc. Your take seems much more personal, and that's also the way I feel it (although I can't explain it to myself rationally).

The 3rd is so true - in big corps, you think you're responsible for this and that because you're the tech lead, project manager or whatever. But most of the time, that's not really true - nothing bad will happen to you if you don't deliver. When you're starting a startup you suddenly realize what personal responsibility means.

Anyway, thanks for a great post!

12
mwseibel 20 hours ago 7 replies      
Hey folks - happy to answer questions here
13
jcroll 2 hours ago 1 reply      
> I cannot promise that doing a tech startup will make you rich (in fact the odds are against you becoming rich)

Ok, so what path has the least resistance for getting rich for a skilled developer?

14
elmar 5 hours ago 0 replies      
Why Most Startups Fail and How To Avoid The Same Fate

http://www.brianhonigman.com/why-startups-fail/

15
nnd 17 hours ago 1 reply      
> Do I seek the hard challenges that most people shy away from?

Arguably, you can find much harder problems working for a bigco, at least when it comes to engineering challenges.

Also, surprisingly, one of the biggest factors involved when it comes to starting a startup is not mentioned: the idea itself.

16
quadcore 6 hours ago 0 replies      
I like it. Though I don't think that's the answer I would give. From my experience, I wanted to start a startup because it was like god himself gave me a mission. And I couldn't turn away from that responsibility. I fucking deeply cared about what I was gonna do. It was so much more important than everything else for me. I was freaking Frodo bringing the ring to Mordor.

So yeah, everything he said, plus the quest.

17
tristanho 20 hours ago 2 replies      
Loved this post, thanks! You mentioned the 3 constraints under which you (and many founders) thrive:

* Being the underdog

* Hard challenges most people shy away from

* Personal responsibility for outcomes

Why are some people best under those conditions? Is it something trainable, or inherent?

18
abhikandoi2000 19 hours ago 0 replies      
Should one also do a startup, when a startup seems to be the only possible path to "see something happen"?
19
ashwinaj 18 hours ago 0 replies      
> I cannot promise that doing a tech startup will make you rich (in fact the odds are against you becoming rich), but I can promise that it is one of the most challenging things you can choose to do. It will push you past your limits, force you to learn faster, and maybe show you that once in a while the impossible is possible.

Couldn't have said it better! Bookmarked, thanks for explaining it in simple terms. I find myself fumbling to explain this to my non-startup friends.

20
taurath 17 hours ago 6 replies      
I do find it rather hilarious that he dropped interest in Yale for being too academic, but during my (failed) interview as a web developer at Twitch about 5 years ago, the questioning had to do only with academic subjects like graph traversal algorithms - they proudly stated when I applied that 90% of their employees come from an Ivy League background.

Here's who I think should start a startup:

- People with the backing of their families and the resources to get into Yale and every other college other than Harvard (thank god!), and who have the means such that failure is not much of a risk.

- People who really need to do this thing, and can maintain their passion and drive and make it the largest part of their lives. And if that thing requires a lot of money to do, the ability to raise money, or already have money.

Both of the above need to be able to physically and mentally handle the 90% failure chance. The author clearly could.

21
raresp 4 hours ago 0 replies      
If you're asking yourself the question "Why Should I Start a Startup?" then you shouldn't start a startup.
22
polpenn 13 hours ago 1 reply      
Okay! Let's do it! I want to start a start-up. What do I do?
23
sophiamstg 6 hours ago 0 replies      
Well, this is so motivating for a specific population, but I don't think the majority thinks this way... after all, paying bills comes first.
24
tanilama 18 hours ago 3 replies      
A startup started because of money is oftentimes uninspiring. It is a good thing that we have fewer startups like that. If you don't have a good problem, please don't make a startup for it.
25
sna1l 17 hours ago 0 replies      
"About 2 months after being kicked out of school I suddenly felt pissed off again. I realized that my school, some of my friends, and even some of my family members thought I would never graduate from college. My motivation came back instantly." -- this passage seemed kinda weird to me. Seems like he got his motivation back due to spite?

His 3 questions at the end also seem to point to him having a chip on his shoulder.

26
yousifa 19 hours ago 0 replies      
What does your data show regarding success rate for people like you vs those who build companies because of the need to have something exist?

What are specific challenges you see in each case?

27
jokoon 8 hours ago 0 replies      
Didn't Bill Gates say something like "startups are needed, because you need failure so that some startups can succeed"?
28
blizkreeg 18 hours ago 1 reply      
You should not start a startup. I like to think of it differently. It's semantics but I find that it aligns my thinking better. You should start a business if:

- you are strongly driven _by the idea_ of doing business, i.e., at some point in your evolution, you become Walt Whitman. A business, which

- solves a problem, makes a product, or offers a service, that

- you think you can do better and make money while doing so, and

- you would rather face the uncertainty of it not working out than be employed working on a thing you are not motivated by.

You should not start a startup if things such as exits, raising VC, the optics of running a startup, or some such affair are of primary concern to you. They should be in the service of running the business and serving its needs, not the other way around.

This is not a script that applies only to "lifestyle" or sustainable businesses, in the parlance of SV. This is the same script that even the largest, most profitable, and most impactful companies (cue the disruption adjectives) on the planet have followed.

29
throw4141 17 hours ago 2 replies      
I've tried to start a startup twice. I failed both times.

And now I have no money left, a family that cannot help me, no job (and I can't find one), and thankfully some friends who are giving me a place to sleep.

Should I start another startup? I wish. Will I do it? I don't know how.

I wish I had a magic idea and some people to invest in it, but not living in the Valley and not having a clue on how to find great ideas, I'm kinda stuck.

I'm already looking for a job; the problem is I fear that the longer I wait for the next startup, the harder it'll be to build a product from scratch, because the expectations of every market will be higher.

Ideas?

30
_ao789 7 hours ago 0 replies      
Fucking great post man. Really hits home.
31
rahimshamsy 19 hours ago 1 reply      
Is the desire to transform as a person a good enough reason? Very few endeavors are as challenging as solving a problem by building a profitable business around it, and having the pleasure of meeting and working with a variety of people.
32
jondubois 16 hours ago 0 replies      
Building a successful tech startup is easy but extremely unlikely.
33
keeptrying 19 hours ago 0 replies      
You shouldn't.
34
crb002 17 hours ago 0 replies      
Garage. It worked for Apple, Google, and HP.
35
idlewords 18 hours ago 1 reply      
The casino explains why you should play dice.
36
gaius 18 hours ago 0 replies      
The only way to win is not to play
37
known 8 hours ago 0 replies      
When you can/will dismantle a "Pyramid scheme"
38
graycat 16 hours ago 0 replies      
For someone considering a startup, again, once again, yet again, over again, one more time, the probability of being successful in a startup, say, estimated from all the startup efforts, is at best useless and otherwise misleading and, thus, worse.

Instead, what matters for an estimate is the conditional probability of success given other information. It's quite possible for the probability of success to be quite low but the conditional probability of success, given other facts, events, situations, etc. that hold, quite high.
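 
To make that concrete, a toy Bayes' rule calculation (every number below is invented purely for illustration; this is a sketch of the arithmetic, not data):
 
  # Toy numbers, invented only to illustrate the point above.
  p_success = 0.10               # overall startup success base rate
  p_traits = 0.05                # share of founders with certain favorable traits
  p_traits_given_success = 0.30  # those traits are over-represented among successes

  # Bayes' rule: P(success | traits) = P(traits | success) * P(success) / P(traits)
  print(p_traits_given_success * p_success / p_traits)  # 0.6 -- same market, very different odds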

The OP's nice words about working in a big company are also deceptive. In fact, especially in technology, those jobs can be darned unstable, vulnerable, and short-lived. If by then you have a house mortgage, three kids on the way to college, maybe some kids who need some help like the author of the OP, getting fired at a big company can be one heck of a bummer.

So, ASAP, while young and making good bucks in tech, look all the time for a startup opportunity. Maybe get one going in your house den, basement, attic, garage, wherever. Pay attention to, learn about, at least dip toes into startups. Start small, don't invest more than pocket change, and LEARN.

Then think of your kids: What are they going to do? Are they going to have to start all over, from zero, like the author of the OP did? Or, pay attention to a family with a good, stable family business (e.g., with a geographical barrier to entry and a very broad customer list), hopefully not much to do with anything unstable like tech, where the kids can learn the business and, thus, at least learn about business and, hopefully, later move into the family business. For having a good life, that can beat the heck out of anything you can learn in political science at Yale.

With high irony, my experience is that the families that can afford to pay full tuition, room, and board at an Ivy League college didn't go to an Ivy League college and, instead, made their money in small-medium business -- ran 10 McDonald's well, was the leading plumbing supply house for a radius of 50 miles, owned a lot of rental property, had a really good independent insurance agency going where they knew nearly everyone in town relevant to the business, etc.

If you want a child in law or medicine, it likely helps a lot for at least one parent to be in law or medicine.

Startup? Really, in the US, for a full career, there really isn't much alternative but to do a successful startup. The question is not whether but how. So, learn how. If you really know how and are determined, then the chances of success should be good, not bad.

39
desireco42 18 hours ago 0 replies      
Because I can. Also, bootstrap if at all possible. Independence can't be valued enough.
40
rjurney 18 hours ago 0 replies      
You shouldn't. Work in open source instead. If that project takes off, start a company. This way you benefit everyone instead of just yourself if you fail. Which you probably will.
41
CryptoFascist 18 hours ago 8 replies      
This is nothing but a pitch by YCombinator to get rubes to work for pitiful amounts of money and likely fail, so that YCombinator can capture the vast profits from the few that succeed. It's YCombinator's business model.
42
carsongross 17 hours ago 0 replies      
Here is a practical reason to start a startup:

You are married to someone who makes enough money to push your marginal tax rate on income to the point (it can hit 55%+ in places like CA) that you might as well take a flier on a startup rather than pulling a W4.

10
Cybersecurity Humble Book Bundle humblebundle.com
439 points by ranit  2 days ago   83 comments top 14
1
dsacco 2 days ago 9 replies      
So, I've read most of these. Here's a tour of what is definitely useful and what you should probably avoid.

_________________

Do Read:

1. The Web Application Hacker's Handbook - It's beginning to show its age, but this is still absolutely the first book I'd point anyone to for learning practical application security.

2. Practical Reverse Engineering - Yep, this is great. As the title implies, it's a good practical guide and will teach many of the "heavy" skills instead of just a platform-specific book targeted to something like iOS. Maybe supplement with a tool-specific book like The IDA Pro Book.

3. Security Engineering - You can probably read either this or The Art of Software Security Assessment. Both of these are old books, but the core principles are timeless. You absolutely should read one of these, because they are like The Art of Computer Programming for security. Everyone says they have read them, they definitely should read them, and it's evident that almost no one has actually read them.

4. Shellcoder's Handbook - If exploit development is your thing, this will be useful. Use it as a follow-on from a good reverse engineering book.

5. Cryptography Engineering - The first and only book you'll really need to understand how cryptography works if you're a developer. If you want to make cryptography a career, you'll need more; this is still the first book basically anyone should pick up to understand a wide breadth of modern crypto.

_________________

You Can Skip:

1. Social Engineering: The Art of Human Hacking - It was okay. I am biased against books that don't have a great deal of technical depth. You can learn a lot of what's in this book from online resources and, honestly, from common sense. A lot of this book is infosec porn, i.e. "Wow, I can't believe that happened." It's not a bad book, per se, it's just not particularly helpful for a lot of technical security. If it interests you, read it; if it doesn't, skip it.

2. The Art of Memory Forensics - Instead of reading this, consider reading The Art of Software Security Assessment (a more rigorous coverage) or Practical Malware Analysis.

3. The Art of Deception - See above for Social Engineering.

4. Applied Cryptography - Cryptography Engineering supersedes this and makes it obsolete, full stop.

_________________

What's Not Listed That You Should Consider:

1. Gray Hat Python - In which you are taught to write debuggers, a skill which is a rite of passage for reverse engineering and much of blackbox security analysis.

2. The Art of Software Security Assessment - In which you are taught to find CVEs in rigorous depth. Supplement with resources from the 2010s era.

3. The IDA Pro Book - If you do any significant amount of reverse engineering, you will most likely use IDA Pro (although tools like Hopper are maturing fast). This is the book you'll want to pick up after getting your IDA Pro license.

4. Practical Malware Analysis - Probably the best single book on malware analysis outside of dedicated reverse engineering manuals. This one will take you about as far as any book reasonably can; beyond that you'll need to practice and read walkthroughs from e.g. The Project Zero team and HackerOne Internet Bug Bounty reports.

5. The Tangled Web - Written by Michal Zalewski, Director of Security at Google and author of afl-fuzz. This is the book to read alongside The Web Application Hacker's Handbook. Unlike many of the other books listed here it is a practical defensive book, and it's very actionable. Web developers who want to protect their applications without learning enough to become security consultants should start here.

6. The Mobile Application Hacker's Handbook - The book you'll read after The Web Application Hacker's Handbook to learn about the application security nuances of iOS and Android as opposed to web applications.

2
EnFinlay 2 days ago 5 replies      
Is there a legal / not crazy expensive way to buy humble bundle books and get them printed on standard 8.5x11, bound in a series of binders / duotangs / twine? I'm going to buy the bundle, but greatly prefer physical pages to reading on a screen.
3
Tepix 2 days ago 4 replies      
I use 2FA on Humble Bundle. In order to log in, I have to solve several captchas. I then have to solve more to buy stuff.

All in all I have to solve the captcha 5 times or so; each time involves marking multiple images.

What sense does this make?

Either they trust the captchas (then they only need one), or they don't (then they should remove them). I've complained about this to them in the past but they haven't changed it.

4
mr_overalls 2 days ago 2 replies      
Schneier's "Applied Cryptography" by itself justifies the $15 bundle, IMHO. This is a great deal.
5
dronemallone 2 days ago 1 reply      
Security Engineering is free on the author's website :) http://www.cl.cam.ac.uk/~rja14/book.html
6
kirian 2 days ago 0 replies      
I find this offering ironic - "Bitcoin payments have been disabled for the Humble Book Bundle"
7
twoquestions 2 days ago 0 replies      
Great, now there's another collection of books I'll want to read, feel bad about missing the deal for, and then kick myself for never actually reading in depth.

I think I've bought 50 books from Humble Bundle (spending about $1/book), but I've only cracked open a few of them.

Also thank you dsacco for the recommendations!

8
znpy 2 days ago 0 replies      
Remember to choose a charity entity for your donation!

ProTip: entities like the FSF, the EFF, Wikimedia and many others can be helped via the humble bundle!!

9
b100w11 1 day ago 0 replies      
Ironically, the Malware Analyst's Cookbook EPUB seems to be infected by a trojan. And the Web Application Hacker's Handbook, also in EPUB, by another one.
10
_coldfire 2 days ago 0 replies      
To download all books at once: https://gist.github.com/graymouser/a33fbb75f94f08af7e36

Improved *nix version further down the thread

Change "MOBI" to "PDF"/"EPUB" if desired

11
nonamechicken 2 days ago 5 replies      
I am interested in learning more about securing web servers (nginx, nodejs). Is there a book in this bundle that could help me? If you know any good books, please recommend one.
12
komali2 2 days ago 0 replies      
Fantastic, glad to have more reading to prep for defcon!
13
SadWebDeveloper 2 days ago 1 reply      
CEH v9 at the 15 USD bundle level is quite a joke; IMHO it should go in the 1 USD level. But anyway, as someone said, Applied Cryptography might be the selling point here.

Personally speaking, the only books valuable in this bundle are "Practical Reverse Engineering: x86, x64, ARM, Windows Kernel, Reversing Tools, and Obfuscation" and "Applied Cryptography: Protocols, Algorithms and Source Code in C, 20th Anniversary Edition"; the others are either quite outdated, too oversimplified, or script-kiddie level stuff.

14
gergles 2 days ago 1 reply      
"Pay what you want^"

^As long as it's at least $15.

It bothers me that Humble Bundle has so heavily embraced this type of marketing.

11
Terminal and shell performance danluu.com
396 points by darwhy  1 day ago   202 comments top 29
1
gnachman 1 day ago 11 replies      
iTerm2 author here.

I'll spend some time looking into iTerm2's latency. I'm sure there are some low-hanging fruit here. But there have also been a handful of complaints that latency was too low: when you hit return at the shell prompt, the next frame drawn should include the next shell prompt, not the cursor on the next line before the new shell prompt has been read. So it's tricky to get right, especially considering how slow macOS's text drawing is.

If I could draw a whole frame in a reasonable amount of time, this problem would be much easier! But I can't. Using Core Text, it can easily take over 150ms to draw a single frame for a 4K display on a 2015 MacBook Pro. The deprecated Core Graphics API is significantly faster, but it does a not-so-great job at anything but ASCII text, doesn't support ligatures, etc.

Using layers helps on some machines and hurts on others. You also lose the ability to blur the contents behind the window, which is very popular. It also introduces a lot of bugs: layers on macOS are not as fully baked as they are on iOS. So this doesn't seem like a productive avenue.

How is Terminal.app as fast as it is? I don't know for sure. I do know that they ditched NSScrollView. They glued some NSScrollers onto a custom NSView subclass and (presumably) copy-pasted a bunch of scrolling inertia logic into their own code. AFAICT that's the main difference between Terminal and iTerm2, but it's just not feasible for a third-party developer to do.

2
nneonneo 1 day ago 6 replies      
I think one thing that this really points out is just how much care Apple has poured into Terminal.app. It's very good, and every time I have to use another terminal application (ugh conhost.exe) I am reminded of this. It's got a bunch of really thoughtful little features (showing what processes are attached to the current pty, option-clicking to move the cursor rapidly, full mouse support for apps like vim, good linewrap detection, and recently support for rich-text copy/paste which is useful for showing coloured terminal output, etc. etc.), and it remains really fast and snappy despite these features.

On a related note, I am big into latency analysis and driving down latency in interactive systems. I'm quite familiar with the touchscreen work cited at the top, and having played with the system I can attest that <1ms latency feels actually magical. At that level, it really doesn't feel like any touchscreen you've ever used - it genuinely feels like a physical object you're dragging around (the first demo of the system only let you drag a little rectangle around a projected screen). It's amazing what they had to do to get the latency down - a custom DLP projector with hacked firmware that could only display a little square at a specified position at thousands of FPS, a custom touchscreen controller, and a direct line between the two. No OS, no windowing system, nada. After seeing that demo, I can't help but believe that latency is the one thing that will make or break virtual reality - the one thing that separates "virtual" from "reality". I want to build a demo someday that does the latency trick in VR - a custom rig that displays ultra-simple geometry that has sub-millisecond latency to human head movement. I will bet that even simple geometry will feel more realistic than the most complex scene at 90 FPS.

3
jwilm 1 day ago 2 replies      
We can do better in Alacritty. For those interested, I've filed a bug on our issue tracker about where this latency is coming from and what can be done: https://github.com/jwilm/alacritty/issues/673

At the end of the day, there is a trade off to be made. Terminals (or any program, really) can have 1-frame input latency (typically 1/60 sec) by giving up v-sync, with tearing as a result, or they can have a worst-case 2-frame input latency with v-sync, and then you're looking at 2/60 sec or ~32ms.
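 
Back-of-the-envelope numbers for that trade-off, assuming a 60 Hz display (a sketch of the arithmetic, not Alacritty's actual pipeline):
 
  # Frame-time arithmetic for the v-sync trade-off described above.
  refresh_hz = 60
  frame_ms = 1000 / refresh_hz   # ~16.7 ms per frame at 60 Hz

  no_vsync_worst_ms = frame_ms   # 1-frame latency, but visible tearing
  vsync_worst_ms = 2 * frame_ms  # worst case with v-sync: ~33 ms

  print(f"no v-sync (worst): {no_vsync_worst_ms:.1f} ms")
  print(f"v-sync (worst):    {vsync_worst_ms:.1f} ms")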

4
mikejmoffitt 1 day ago 9 replies      
Interesting results. I have loved XTerm for a long time because it "felt snappy". On MacOS I've always preferred Terminal.app to the often recommended iTerm2 for similar reasons.

I think it's funny to have the suckless project page for st go on and on about how XTerm is clunky and old and unmaintainable, when the result of this small and clean minimalist terminal is a clear loser in terminal performance, which subconsciously and consciously detracts from the experience.

5
joosters 1 day ago 4 replies      
> the most common terminal benchmark I see cited (by at least two orders of magnitude) is the rate at which a terminal can display output, often measured by running cat on a large file. This is pretty much as useless a benchmark as I can think of

It's a really helpful benchmark, IMO, as it's the main problem I see with different terminals. On a chromebook, most SSH clients are effectively useless because if you accidentally run a command that prints a lot of output (even just 'dmesg'), the terminal locks up for a huge amount of time, seconds or even minutes. You can't even interrupt the output quickly.

I appreciate that it's a different problem to the latency that the OP is trying to measure, but as a benchmark, it's actually very useful.

6
def- 1 day ago 3 replies      
Was curious and tried this out a bit on Linux+X11 on an i7 6700k with the igpu:

               stdout [MB/s]   idle 50 [ms]
  urxvt            34.9           19.8
  xterm             2.2            1.9
  rxvt              4.3            7.0
  aterm             6.0            7.0
  konsole          13.1           13.0    note: stops moving when printing large file
  terminator        9.1           29.4    note: stops moving when printing large file
  st               23.0           11.2
  alacritty        45.5           15.5

7
Tyriar 1 day ago 0 replies      
I work on the VS Code terminal/xterm.js[1].

Hyper, which currently uses a fork of hterm, is in the process of moving over to xterm.js due to the feature/performance improvements we've made over the past 12 months. Hyper's 100% CPU/crash issue[2], for example, should be fixed through some clever management of the buffer and minimizing changes to the DOM when the viewport will completely change on the next frame.

I'd love to see the same set of tests on Hyper after they adopt xterm.js and/or on VS Code's terminal.

Related: I'm currently in the process of reducing xterm.js' memory consumption[3] in order to support truecolor without a big memory hit.

[1]: https://github.com/sourcelair/xterm.js
 
[2]: https://github.com/zeit/hyper/issues/94
 
[3]: https://github.com/sourcelair/xterm.js/issues/791

8
chubot 1 day ago 6 replies      
If you're sensitive to latency and run Linux, try hitting Ctrl-Alt-F1, and do a little work in console mode at the terminal. (Ctrl-Alt-F7 to get back.)

For me this is a great illustration of how much latency there is in the GUI. Not sure if everyone can feel it, but to me console mode is much more immediate and less "stuffy".

9
tomsmeding 1 day ago 1 reply      
> even the three year old hand-me-down laptop I'm using has 16GB of RAM

> on my old and now quite low-end laptop

Trust me, that's not a low-end laptop. Either that has the shittiest CPU ever and a terribly mismatched amount of memory, or the author's view of what is high-end or low-end is skewed; in either case, what's low-end nowadays would be 4GB RAM. 16GB is LOTS, useful for developers who run large builds and/or VMs regularly.

I very much like the rest of the article though, would love to see some latency improvements here and there!

10
dahart 1 day ago 3 replies      
I love the analysis of terminal latencies! And I'm in full agreement with the overall goal of less latency everywhere. But, of course, I feel like picking a few nits.

> And it turns out that when extra latency is A/B tested, people can and do notice latency in the range we're discussing here.

Yes, this is true. But the methodology is important, and the test used doesn't really apply to typing in terminals. The test isn't a "type and see if you can tell it's slow" test, it's a hit-the-mark hand-eye coordination test, something you don't do when typing text. Latency when playing Guitar Hero is super duper important, way more important than in most other games, which is why they have a latency calibrator right in the game. Latency when playing a Zelda game is a lot less important, but they still try very hard to reduce latency.

The same people who can distinguish between 2ms of difference in a drum beat also can't distinguish between an extra 30ms of response time when they click a button in a dialog box.

I'd like to see a stronger justification for why lower latency in a terminal is just as important as it is for hand-eye coordination tasks in games.

  ~2 msec (mouse)
  8 msec (average time we wait for the input to be processed by the game)
  16.6 (game simulation)
  16.6 (rendering code)
  16.6 (GPU is rendering the previous frame, current frame is cached)
  16.6 (GPU rendering)
  8 (average for missing the vsync)
  16.6 (frame caching inside of the display)
  16.6 (redrawing the frame)
  5 (pixel switching)
I find this list pretty strange. It's generally right - there are a bunch of sources of latency. But having done optimization for game consoles for a decade, this explanation of game latency feels kinda weird.

Games that actually run at 60fps usually do not have greater than 100ms latency. They also don't miss vsync every other frame on average, that 8ms thrown in there looks bizarre to me. Render code and GPU rendering are normally the same thing. Both current and previous frame GPU rendering is listed, huh? Sim & render code run in parallel, not serially. The author even said that in his article, but lists them separately... ?

Consumer TVs come with like 50ms of latency by default. That's often half of it right there. Games are often triple-buffered too, that accounts for some of it. The ~2ms right at the top belongs in the 8ms wait, it disappears completely.

I just get the feeling the author of this list was trying hard to pad the numbers to make his point; it feels like a very hand-wavy analysis masquerading as a proper accounting of latency.
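 
For reference, summing the quoted budget (values in ms, copied from the list above) shows the total it implies:
 
  # Add up the article's quoted latency budget (ms).
  budget = [2, 8, 16.6, 16.6, 16.6, 16.6, 8, 16.6, 16.6, 5]
  print(sum(budget))  # ~122.6 ms, well above the ~100 ms a real 60fps game sees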

11
chubot 1 day ago 0 replies      
If you care about end-to-end latency, I highly recommend this talk by John Carmack:

https://www.youtube.com/watch?v=lHLpKzUxjGk

This talk blew my mind and made me feel like a terrible engineer. He's talking about end-to-end latency in VR, which actually has a commercial motivation because VR products with high latency will make you sick. (this obviously doesn't happen with shells and terminals!)

He's talking about the buffering and filtering at every step of the way. And it's not just software -- sensors have their own controllers to do filtering, which requires buffering, before your OS kernel can even SEE the first byte, let alone user space. On the other side, display devices and audio output devices also do nontrivial processing after you've sent them your data.

It's an integrated hardware/software problem and Carmack is great at explaining it! It's a dense, long-ish talk, but worth it.

12
victorhooi 3 hours ago 1 reply      
I want to try Alacritty on OSX - but the big turnoff for me is the lack of binaries.

However, I don't know if that's intentional - maybe they don't think it's ready yet for people who won't install/compile the whole Rust stack from scratch?

13
adekok 1 day ago 2 replies      
Latency is why I could never use a WYSIWYG word processor. While I don't like VI that much, its latency is low enough that it's not a problem. i.e. I press a key, and miracle of miracles, the character appears on the screen.

Using a WYSIWYG word processor, there's enough latency between keypress and visual update that I find them impossible to use.

When Apple came out with Pages, it was apparent that they paid strong attention to latency. That means the latency is small enough that (for me) using it isn't an exercise in frustration.

14
diggan 1 day ago 3 replies      
Fun test to check how fast the terminal can handle loads of text: run "time head -c 1000000 /dev/urandom"

On a MacBookPro11,1 - 2,8 GHz this shows:

iTerm2: 2.182 total

iTerm2 with tmux: 0.860 total

terminal.app: 0.135 total

terminal.app with tmux: 0.910 total

Surprisingly, iTerm2 is faster with tmux than terminal.app is with tmux. But terminal.app without tmux is the fastest.

Does anyone know why the performance with tmux is so different between the two terminals?
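 
For anyone who wants to script this comparison, a minimal harness mirroring the command above (1 MB of random bytes; run it inside each terminal you want to compare):
 
  # Time how long the terminal takes to display 1 MB of random bytes,
  # mirroring `time head -c 1000000 /dev/urandom`.
  import os
  import sys
  import time

  data = os.urandom(1_000_000)
  start = time.monotonic()
  sys.stdout.buffer.write(data)
  sys.stdout.buffer.flush()
  print(f"\n{time.monotonic() - start:.3f}s to display 1MB", file=sys.stderr)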

15
mrob 1 day ago 0 replies      
The common LibVTE-based terminals have problems with latency because they're deliberately capped at 40fps. Xterm doesn't have this problem.

Gedit gained this problem when it switched to GTK3. The forced animation for scrolling doesn't help. Mousepad still has low latency, at least in the version included with Debian Unstable, but I worry that the port of XFCE to GTK3 will make it as bad as GNOME.

16
chillee 1 day ago 0 replies      
One other thing to note is that compositors seem to add a fairly large amount of latency. I ran the app linked in the "Typing with Pleasure" post and I saw a roughly ~20ms improvement across various text editors with the compositor turned off (I'm using Termite with Compton as my compositor).

http://imgur.com/0G3qbpr

17
cat199 1 day ago 2 replies      
So, on the other side, anyone want to build a true 'terminal emulator' that has baud-speed emulation?

top just doesn't look the same without the changes trickling down the screen, matrix like..

Thankfully I can run GlassTTY font connected to the KVM serial console for a near approximation.. but it's still too fast :)

Grew up in the VC/GUI transition era, but buying a vt220 and running it at 19200 on a used Vax taught me a Zen of command line that nothing else could... Not only did you have to think about what the command would do, but also how much text it would display, and whether you'd need to reboot the terminal after it got hosed up...

18
iClaudiusX 1 day ago 0 replies      
I found this note in the appendix interesting with respect to why we don't seem to notice this latency

> Terminals were fullscreened before running tests. This affects test results, and resizing the terminal windows can and does significantly change performance (e.g., its possible to get hyper to be slower than iterm2 by changing the window size while holding everything else constant).

Perhaps we don't notice because it's so much lower at window sizes much less than fullscreen?

19
cannam 1 day ago 1 reply      
Interesting article, but I don't quite get it.

I'd imagine that terminal users will often be looking at the last bit of output as they type, and hardly looking at the thing they're typing at all (one glance before hitting Return). They aren't going to notice a bit of latency. And terminals are often used to communicate over networks that introduce a lot more latency than any of the measurements here.

I think, for me, this is a bit like the sales pitch for Alacritty -- fastest terminal ever as long as you don't mind that it doesn't scroll. Someone is using their terminal very differently from the way I use mine.

20
wallstquant 1 day ago 0 replies      
It would also be great to see how these scale with terminal size. I personally use iTerm2, but after switching to a new MacBook Pro this year with two 5K displays it's noticeably slower. I'm assuming some O(n^2) scaling behind the scenes, but I haven't measured anything myself. Still, @gnachman, I love your term, especially with terminalplots.jl and drawing Julia REPL images inline.
21
chubot 1 day ago 0 replies      
How did he actually measure the latency in this article?

Doesn't measuring keypress-to-display latency require special hardware?

All tests were done on a dual core 2.6GHz 13" Mid-2014 MacBook Pro. The machine has 16GB of RAM and a 2560x1600 screen. The OS X version was 10.12.5. Some tests were done in Linux (Lubuntu 16.04) to get a comparison between macOS and Linux. 10k keypresses were used for each latency measurement.

Latency measurements were done with the . key and throughput was done with default base32 output, which is all plain ASCII text. This is significant. For example, terminal.app appears to slow down when outputting non-latin unicode characters.

22
portlander12345 1 day ago 5 replies      
Slightly off-topic, but this reminded me of a mystery: Does anyone else experience that bash takes a long and variable time to start, on many systems and without any fancy setup of any kind? What can a shell be doing that it takes three or four seconds to start?
23
wmf 1 day ago 0 replies      
I guess this isn't measuring end-to-end latency which would be discretized in units of frames and would have a constant overhead from the display pipeline. I wonder if the differences between terminals would look much smaller if measured that way.
24
chuckdries 1 day ago 1 reply      
I'm surprised Hyper did as well as it did on the latency test, after all it does run in Electron, which I'd expect to add a lot of overhead between keypress and text display
25
gue5t 1 day ago 4 replies      
It'd be cool if someone would replicate this on Linux under X11 and a few Wayland compositors, and throw urxvt, rxvt, xterm, aterm, and some others in the mix.
26
darklajid 1 day ago 1 reply      
> even the three year old hand-me-down laptop I'm using has 16GB of RAM

Oh my.. I wish that was the case. Even current development machines tend to have just 8 around here..

27
mrbill 1 day ago 1 reply      
I wonder how much performance hit font antialiasing in iterm2 causes, or if it was turned on during these tests.
28
aorth 1 day ago 1 reply      
Interesting to see that both alacritty and Terminal.app are very fast but running tmux inside them kills performance.
29
tome 1 day ago 2 replies      
How can st be so slow? It's tiny and hardly does anything!
12
Firefox marketshare revisited andreasgal.com
369 points by ronjouch  23 hours ago   457 comments top 69
1
JohnTHaller 22 hours ago 9 replies      
One additional cause of new Chrome installs taking over from Firefox: bundleware. Chrome is foisted upon users as install-by-default bundleware when users install or update lots of different apps, especially free antivirus apps on Windows. Just clicking "Continue" when your free antivirus on Windows updates will cause Chrome to be installed and set as the default browser. Here's an image of Avast tricking you into installing Chrome: http://imgur.com/hNZLbmL

I've had to fix this for three family members previously as they were using a free antivirus and couldn't figure out why their browser looked different and didn't have an ad-blocker now.

2
epoch1970 21 hours ago 18 replies      
I think the "Why?" section's conclusions are off the mark. It basically blames Google's advertising of Chrome for Firefox's decline, and even goes so far as to say "Firefoxs decline is not an engineering problem."

While I don't doubt that Google's advertising of Chrome has drawn away some Firefox users, I also don't think that we can ignore or deny the many controversial changes to Firefox that have likely had an impact, too.

Just off of the top of my head I can think of things like:

* Frequent breakage of extensions when first switching to the more rapid release schedule.

* Frequent and disruptive UI changes that didn't bring users much benefit, such as Australis.

* Removing the ability to easily disable JavaScript.

* Taking many years to get multiprocess support working. (Not that I'm suggesting they should have rushed it, of course.)

* The inclusion of Pocket and Hello.

* Sponsored tiles.

* Users who report experiencing poor performance and high memory usage.

* Disruption caused by requiring signed extensions.

* The removal of support for OSes or OS releases that are moderately older, but still do have active users.

I'm sure there are others that I'm forgetting.

Even if they seem minor, those are the kinds of things that can cause users to switch away from Firefox, or not even start using it in the first place. Losing a small number of users for a variety of minor reasons can add up very quickly, as well. Furthermore, those issues don't really have anything to do with Google or Chrome.

3
osoba 2 hours ago 1 reply      
Maybe this is a good opportunity for Firefox to abandon its "forced mediocrity" model.

The vanilla installation of Firefox lacks basic UI components (mouse gestures for example), lacks session management, and the bookmark and history interfaces look like they were made in 1995.

When you click an old entry in History, I don't understand why it's so difficult for the selection to stay near the formerly clicked item, instead of it selecting the topmost entry, forcing you to scroll all the way down again if you want to open another entry that's near the previously clicked one.

Why can't Bookmarks employ a simple logistic classifier? OK, I stopped using Firefox's bookmark system a long time ago (because it's so shitty), but if I were still using it I would expect the browser to be smart enough to figure out that if all my bookmarks from a certain site are in a specific bookmark folder, that most likely means a new bookmark from that same site should go there and should be offered as the first choice.
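 
A minimal sketch of that rule in Python - a plain frequency count rather than a logistic classifier, and the data model here is hypothetical, not a Firefox API:
 
  # Suggest the folder that already holds the most bookmarks from the
  # same host. Hypothetical data model, not Firefox's.
  from collections import Counter
  from urllib.parse import urlparse

  def suggest_folder(bookmarks, new_url):
      """bookmarks: iterable of (url, folder) pairs already saved."""
      host = urlparse(new_url).netloc
      counts = Counter(folder for url, folder in bookmarks
                       if urlparse(url).netloc == host)
      return counts.most_common(1)[0][0] if counts else None

  saved = [("https://news.ycombinator.com/item?id=1", "HN"),
           ("https://news.ycombinator.com/item?id=2", "HN"),
           ("https://example.com/post", "Misc")]
  print(suggest_folder(saved, "https://news.ycombinator.com/item?id=3"))  # HN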

Now, yes, of course you can add all these features in a slow JavaScript-based addon which will eat your memory and CPU time and allow the Firefox team to blame the addons when something goes wrong with Firefox, but at some point you have to reconsider whether this is such a good idea.

Sure very few people use mouse gestures in Firefox and adding them out of the box could be interpreted as bloat, but maybe if more users even knew what mouse gestures were and how useful they are, they would start considering them a fundamental aspect of a browser's interface and not just a fancy knick-knack.

I miss the old Opera so much :(

4
ssivark 19 hours ago 5 replies      
> Firefoxs decline is not an engineering problem. Its a market disruption (Desktop to Mobile shift) and monopoly problem. There are no engineering solutions to these market problems. The only way to escape this is to pivot to a different market [...]

Privacy is the one problem that Mozilla/Firefox can address, which Google and Microsoft will be fundamentally conflicted about addressing. It is also a growing market; that is the market Firefox should be aiming for!

It seems to me that Mozilla/Firefox folks don't appreciate this at a deep level. They are eroding user trust in the attempt to gather data for engineering better features. E.g., see the recent controversy regarding Firefox's usage of Google Analytics: https://news.ycombinator.com/item?id=14753546 .

I made some comments on that thread, on how Mozilla/Firefox could try to win the privacy market. I don't want to repeat those comments, so I'll just link to them: https://news.ycombinator.com/item?id=14754672

5
dhekir 22 hours ago 2 replies      
Some crappy companies such as Eurostar currently experience issues on their website when using Firefox (e.g. the impossibility of using vouchers in some cases), and when you contact customer support, they clearly state that "Chrome is recommended" for better results, and that "there are known issues with Firefox". I initially thought it was due to some Firefox add-ons, but even with all of them disabled, things do work better in Chrome.

I've also seen other (somewhat badly designed) websites where using Chrome leads to fewer issues, probably because their developers are only testing with it and using non-standard or legacy features/plug-ins. Because of those issues, I am forced to recommend that family members try Chrome when things seem broken, to the point that some have now switched to it by default. I really hope this will not become another IE-like situation...

6
apeace 52 minutes ago 0 replies      
It strikes me that the reason Firefox rose to prominence in the first place was the same thing: web sites all over put banners on the top of their pages (for IE users), saying something like "You should upgrade to a modern browser".

The difference is that in those days, it was the developers of many different web sites doing it. I did it on many sites I worked on. We were sick of working in IE and wanted a browser that followed web standards we could all use.

I don't think Chrome's dominance is a bad thing. Because if Chrome ever breaks the web for developers, we'll just do it all over again (or force Chrome to follow us, as we did with NaCL vs. WebAssembly).

7
dannysu 21 hours ago 7 replies      
It's not just marketing. It's also Google websites that only work with Chrome.

For example, Hangout. I can no longer use Hangout using Firefox.

Or I think Gmail Inbox, which also came out only working on Chrome initially.

It's the sum of all these things that look very much like "best viewed with internet explorer" type stuff. I don't ever want to go back to such a world.

8
rocky1138 20 minutes ago 0 replies      
> Mozilla publishes aggregated Firefox usage data in form of Active Daily Installs (ADIs) here (Update: looks like the site requires a login now. It used to be available publicly for years and was public until a few days ago). The site is a bit clumsy and you can look at individual days only so I wrote some code to fetch the data for the last 3 years so its easier to analyze (link).

These two things are probably related :)

9
carussell 21 hours ago 3 replies      
Side note. From Andreas's post:

> looks like the site requires a login now. It used to be available publicly for years and was public until a few days ago

I'm no longer a Mozillian, but stuff like this is really, really weird. I'm referring in general to things being hidden or locked up (Mozilla as an organization operated more openly than anything else I can think of, which is part of what used to make it so beautiful and successful), but specifically, I'm talking about sign-ins.

I stopped touching stuff on developer.mozilla.org 5+ years ago (or even consulting it, really), but I was reading some docs on the site last week and saw something that was so outright wrong that I felt it had to be fixed. I tried to, and it turns out that you have to use GitHub to sign in. The idea of requiring a social media sign in for a Mozilla web property is one of the most un-Mozilla things possible and really blew me away.

10
sriram_iyengar 21 minutes ago 0 replies      
I've been a Firefox user for a very, very long time - I do not remember using IE or Chrome for any serious amount of time - and I've been using a Mac for a decade now, and not even Safari! I have never found Firefox disturbing my dev work any day. I will continue to use Firefox.
11
blunte 21 hours ago 1 reply      
Google definitely has been a (major) contributor to the decline of Firefox, both with all the Google site notices suggesting users switch to Chrome and the works-on-Chrome-first features of Gmail, Drive, etc. That last issue is years old, but I would bet it got a lot of people to first try Chrome.

Another factor could have been Mozilla's defaulting to Yahoo for search (and the difficulty some people had with changing and keeping the change to another search provider). For quite a few years Yahoo has not been very good at search, and Mozilla's insistence on teaming up with them probably brought Mozilla's name down.

12
Touche 19 hours ago 3 replies      
I still believe that Mozilla's biggest mistake with mobile was not Firefox OS; it was that they started on Android too late. They should have been on Android from day one, but they weren't, and when they did build Fennec, it was really bad. They eventually fixed it, but by that point Chrome for Android was already out.

And then they pivoted to Firefox OS. At a time when WebOS had already failed, Nokia had already failed, and the writing was on the wall for Blackberry and Windows Phone. It was already well known that the market couldn't support another mobile OS, and that was the moment they decided to build one, totally bizarre.

I firmly believe that if Mozilla had gone all-in on Firefox for Android at the time when Android's browser was just atrociously bad, they could have been the hip option there, and had a leg-up on Chrome for Android.

To everyone that says "people don't install 3rd party browsers on mobile", that's 100% wrong. Chrome for Android was a 3rd party browser for several years and was popular.

13
owly 17 hours ago 1 reply      
Lots of haters on here! :) Like most of you, I use all browsers to test sites and applications. But Firefox is my main browser on all platforms for a bunch of reasons, and I have no issues with performance. It has all the add-ons I need. I like the way it looks compared to the alternatives. The Test Pilot add-ons have been great. https://testpilot.firefox.com/experimentsAnd last but not least, by using it I'm supporting the open web and not feeding a monopoly.
14
MichaelMoser123 1 hour ago 1 reply      
I would be glad to use Firefox on Windows, but there are installation problems: after install, the browser crashes on any attempt to use it (on my Linux VM it works just fine).
 
The Firefox people should take care of such details when they deal with the most widely used desktop OS.

15
shmerl 22 hours ago 2 replies      
It is indeed a monopoly problem. Google should be required to give browser choice in such ads, same as MS were.

What I worry about is the increasing prevalence of "best viewed in Chrome" and sites starting to break in Firefox. That's going to be very bad.

16
cpeterso 18 hours ago 1 reply      
The article's ADI charts do not account for Mozilla moving Windows XP and Vista users from the Firefox release channel to the ESR (Extended Support Release) channel in March 2017 [1]. New versions of Firefox do not support XP or Vista, but XP and Vista users will continue to receive ESR security updates at least through 2018 Q1. You can see a similar "drop" in Mozilla's Firefox Hardware Report [2].

[1] https://blog.mozilla.org/futurereleases/2016/12/23/firefox-s...

[2] https://hardware.metrics.mozilla.com/

17
jchw 14 hours ago 1 reply      
I find it pretty amusing that nobody is going to acknowledge the idea that maybe, just maybe, there's also the fact that Firefox has simply fallen behind Chrome in many respects, losing the preference of many developers and power users. They are far from the majority, but there are without a doubt cascading effects. Google's marketing is probably only getting more aggressive because there are going to be diminishing returns the further they go.
18
blauditore 17 hours ago 1 reply      
I've been saying this for years: Chrome's market share is mostly caused by Google's aggressive advertisement. Many users don't even know exactly what a browser is; they just clicked that button at some point because the text next to it told them to do so.
19
johndoe489 3 hours ago 0 replies      
I originally switched to Chrome soon after it came out because it was fast.

I still use it today because usability wise it's just better for me.

I can't for the life of me get used to a separate search box. The "omnibar" is simply fantastic. Coupled with turning off "search suggestions" in the Settings, you have a wiki on hand, pretty much. Anything you type will match a personal bookmark, a personal search, or the title of a page visited earlier. This means I don't need to make a bookmark in many cases. I can also manage the omnibar to give optimal results by making random, useless searches in a private window, which again is so easy to do in Google Chrome (Ctrl Shift N). And then if a search match is inconvenient for speed or just not useful anymore, it's just shift+del to remove it.

Firefox completely lost me when I looked back and it was at version "52" instead of the version 14 or something I was on, just a year or two later. I was like "what the hell??" "WOW, what are all these amazing updates they made?" Only to realize barely anything had changed at all.

And lately they just lost me completely as a developer. They wanted to integrate the Firebug extension, arguably the most useful aspect of Firefox for developers. I kept using Firefox for Firebug for years, while Chrome was my main browser. But since they integrated it, it just performs worse. It's so damn slow and unusable; meanwhile, Google's console just gets better and better.

20
notatoad 20 hours ago 3 replies      
With every new version, I give Firefox another try, and it always just feels sluggish compared to Chrome. The UI is not as responsive and pages don't seem to load as quickly. I don't know if there's any actual data or measurements to back this up and I haven't tried to measure any speed differences, but for me the reason I use Chrome instead of Firefox is absolutely an engineering problem and not a marketing one.

I'd much rather use a Mozilla product than a Google one, but chrome is simply a better browser.

21
rossdavidh 22 hours ago 4 replies      
While Firefox on mobile is virtually nonexistent, what this post asserts just doesn't look true to me. He's basically asserting that Chrome is where Internet Explorer was in the late 90's, but when I see what browser people are using for presentations, or when I am pair-programming or otherwise able to see directly what people are using, I see Firefox commonly. Outside the U.S., I don't have much visibility, but the NetMarketShare data (https://www.netmarketshare.com/browser-market-share.aspx?qpr...), which shows Firefox on the increase in the last year, looks a lot more like what I am witnessing.
22
buster 3 hours ago 1 reply      
So not true. I try Firefox once in a while, but Chrome still is more responsive and has the better UI -> better UX.

Basically I am waiting for a Servo-based browser, which will hopefully change the UX in favor of Mozilla again.

Oh, and PLEASE Mozilla. Unify that f* search toolbar into the address bar, already. It's stupid.

23
FollowSteph3 20 hours ago 1 reply      
I disagree with the article. When Firefox first got popular the default was internet explorer which was already installed on your computer. However because Firefox was so far ahead word spread and people took the time to install it.

These days, however, there is no really big advantage to using Firefox over Chrome, and when the difference is that close, marketing and convenience will win. In other words, if Firefox had merely been on par with Internet Explorer years ago, it would never have gained the market share it did in the first place.

It's not just a marketing issue but a combination of a marketing and engineering issue.

24
remir 17 hours ago 0 replies      
The reality is that for a while, Chrome was simply a better browser. Extensions "just worked", it silently auto-updated (huge for non-technical users), was very secure (anti-phishing), came with Flash, had sandboxing from day 1, etc...

I installed Chrome on the PCs of family members and it was trouble free for them. No need to update Flash separately, no random crashes, the anti-phishing is great, too.

25
PeterStuer 7 hours ago 0 replies      
Long-term FF user here. I still use it as I stand behind the independence, but... I have found FF speed and stability gradually lacking. What was once a fast and lean browser has turned into a behemoth. Of course, part of it is beyond their control, as it seems more and more publishers only QA on Chrome nowadays, leaving FF behavior in the 'hope and pray' category of UX. I'll stick with it for now, but saying I'm on the verge of switching wouldn't be far from the truth. If it were not for the ideological aspect, I would have switched to Chrome a long time ago.
26
moocowtruck 21 hours ago 0 replies      
I was expecting a bit more than blaming Google... The reason I stopped using Firefox is that it became nothing more than a 'meh' Chrome clone and slowly killed its ecosystem.
27
swiley 19 hours ago 0 replies      
They argue they're privacy-minded and then remove control from the user.

Everyone who doesn't care about control is just going to use Chrome, Edge, or IE, so going after that market is probably not a good use of resources.

I don't quite get the whole performance thing; Chrome eats memory constantly and thrashes the machine, which is something Firefox doesn't do. Firefox is single-threaded, though, so shitty pages will hang it.

28
reacweb 7 hours ago 0 replies      
For me, the compelling feature of Firefox over Chrome is that using Firefox Portable, I can get around company policy and configure the proxy to bypass the Blue Coat filter.
29
mcjiggerlog 21 hours ago 2 replies      
I really want to like Firefox Android (addons are awesome!) and try it out every now and then, but every time I just end up uninstalling and reverting to Chrome.

The number one reason is that scroll seems to work differently to every single other app I have installed. It's "sticky" and doesn't feel native. It also takes a noticeable amount of time to render the page when scrolling quickly, which is not something I've ever noticed with Chrome. What gives?

30
zimbatm 6 hours ago 1 reply      
Chrome has other advantages as well.

If you buy into the Google Suite then you get synced profiles. Firefox has the same, but the account is only useful for keeping Firefox in sync, whereas Google's also gives you access to all their other products, plus OAuth to third-party services.

Google Chrome exists inside of an ecosystem, which means that it stays simple. On the other hand, Mozilla has a tendency to treat the browser as a goal in itself, which is understandable but creates things like the Pocket extension and other UX complexities.

Android's unremovable Google search doesn't open the default browser but presents the result in a Chrome WebView.

31
Rjevski 4 hours ago 0 replies      
One of the issues I see with Firefox is that they did some stupid stuff like Pocket, Hello, and this awful Australis UI that, as a result, alienated a lot of power users.

Power users are Firefox's best chance at regaining market share, and some of those users are now gone as a result of Mozilla's stupid decisions.

32
ksk 21 hours ago 1 reply      
It's quite surprising that Google has avoided anti-trust scrutiny for as long as it has.
33
Aissen 22 hours ago 1 reply      
I've been a firm Firefox on Android user for years, but I recently switched to Brave. While desktop performance is acceptable, Android cold-launch performance is very bad, and Chromium-based browsers beat it to the punch. And the native (implemented in C++) ad blocking means better performance than uBlock Origin.

Too bad, I really liked Firefox Sync, it was such a superior solution (for privacy, at least).

34
iopq 19 hours ago 0 replies      
I love the chart that goes from -7% to -22%

it cuts off exactly where you would think there's ten times fewer Firefox users

35
dep_b 20 hours ago 0 replies      
I don't use Firefox that much because I'm mostly on macOS, but every time I use Windows and I open Firefox it seems more snappy again. I am making sure nobody in my family uses Chrome because it's a resource hog and effectively helps the same kind of monopoly we had with Internet Explorer.
36
gator-io 20 hours ago 0 replies      
Here is a view of browser market share with detectable bot traffic removed.

https://truemarketshare.com

Firefox is dropping, but not collapsing. My opinion as to the primary reason why: the Yahoo default search.

37
dandare 6 hours ago 0 replies      
Dear Mozilla team, I, for a change, think Chrome is a better browser than Firefox.

I am not talking about the performance of JavaScript, compliance with standards, or developer tools; no, I am talking about Firefox's outdated UI and inconsistent user experience. Chrome is slick and fast, while FF often lags, wastes space in the tabs and address bar, and confuses me with its additional search bar.

38
nevir 14 hours ago 0 replies      
> This explains why the market share decline of Firefox has accelerated so dramatically the last 12 months despite Firefox getting much better during the same time window.

(this quote is from the article, in reference to Google aggressively advertising Chrome)

I'm pretty sure that all the ads mentioned in the article have been around for far longer than 12 months. What else might have happened 12 months ago to influence the decline?

39
rrggrr 21 hours ago 0 replies      
Extensions are tipping in favor of Chrome. Many of the extensions I use are Chrome only.
40
twobyfour 4 hours ago 0 replies      
Returning to a browser monoculture would be a loss for the web and its users.
41
self_awareness 4 hours ago 0 replies      
I didn't switch from Firefox to Chromium because Google puts the "Chrome" name all over the place. I did the switch because Chrome is 2x faster than Firefox.
42
ashitlerferad 18 hours ago 0 replies      
Since my 10AM EST blog post comment has not been approved, I'll paste it here:

"...the falling off the cliff is just the snowball effect of bad management and decisions made many years ago. Its to late now to stop the bleeding as-is. The solution is right there, although obvious, its probably to much for Mozilla to undertake at this point."

43
makecheck 20 hours ago 0 replies      
I really wish Google's Chrome spam wasn't "working" because I am so tired of it (and anything like it). This is a variation of the "Here's what's new in the app that you didn't know you updated!" dialogs that developers seem to like now.

If I could have software and services not totally derail what I was trying to do, that would be greaaaaaat.

44
rubatuga 19 hours ago 4 replies      
Well, maybe if they updated their shitty UI, I would be inclined to install it. Why can't Firefox combine the search and address bar like every other major browser? Why can't Firefox ditch their slow animations, buttons, and menus, and do with fewer skeuomorphisms? They need a serious refresh if I were ever to start using it again.
45
HellDunkel 20 hours ago 0 replies      
I know how much better Chrome is, yet I stick with Firefox, all because of the idea of a free web.

It is slow. The UI sucks. It looks dated. It crashes far too often and eats up loads of memory. Don't blame Google for its ads; the problems are homegrown. It's sad to say this, but I guess I will turn my back on it too if things don't change.

46
spiderfarmer 9 hours ago 0 replies      
I love you Firefox, but you're horrible on retina screens. Just scrolling takes twice as many CPU cycles compared to Safari. It's troubling because I'm the biggest Firefox supporter I know and even I switch to Safari when I hear the CPU fan spinning.
47
digi_owl 21 hours ago 0 replies      
For me at least, Firefox has been burning bridges like crazy.

The change in UI to Australis I could deal with, as it could be mitigated with extensions.

But "recently" they changed to GTK3 on *nix, and are now in the process of making extensions less potent.

All this makes it harder to continue using Firefox where it used to be the flagship browser.

48
abiox 15 hours ago 2 replies      
> Firefox's decline is not an engineering problem

Possibly. However, for me, technical problems are why I avoid it in general.

I still use it a bit, as I'm lazy about switching between user accounts with various services, and separate browsers make this easy.

Sadly, nearly every day Firefox will crash, often when I'm not even using it. It happens so often I don't even get annoyed anymore... it's just normal. My system is a fairly new build and nothing else crashes (or at least, so infrequently I don't recall anything).

49
corford 20 hours ago 2 replies      
Maybe Firefox is slow on Linux, but on Windows I don't notice a difference between it and Chrome. If anything, FF starts faster on my Win10 box. The UI is just as snappy, and I vastly prefer FF's settings dialogs to the kid-gloves ones in Chrome.

Also can't remember the last time FF crashed on me (and I usually have hundreds of tabs open for weeks/months on end).

Dev tools are a toss up but I tend to use the ones in FF more than Chrome, probably simply out of habit.

Once Servo becomes mainline (and assuming it delivers on its promise) I can't see why anyone would choose anything other than FF.

¯\_(ツ)_/¯ works for me

Edit: I'm not big on extensions but do have a few installed: session manager, foxyproxy, one tab and fireshot.

50
faragon 3 hours ago 0 replies      
Firefox: please lower the priority of the religious stuff (Rust, etc.), and increase the priority for actual work involving better user experience.
51
morekozhambu 21 hours ago 0 replies      
I was a Firefox fan until recently. I guess it was Firefox 51 or so when I switched to Chromium, purely for usability's and performance's sake. The page loading and bookmarks management were horrible at that point. Not sure how it is now.
52
baalimago 3 hours ago 0 replies      
Firefox is important.

Don't let it fall.

53
bla2 22 hours ago 3 replies      
Google has been pushing chrome on their sites for years. Firefox's drop in desktop is recent. So just marketing can't be the explanation.
54
badpenny 20 hours ago 0 replies      
Now and again I'll try switching to Firefox but it's just incredibly sluggish compared to Chrome so I end up switching back.
55
tschellenbach 20 hours ago 0 replies      
Chrome is just a (much) better product. Combination of building a better product and a lot of advertising.
56
bahjoite 20 hours ago 1 reply      
Not included in these numbers are installs of Trisquel's Abrowser and The Tor Project's TorBrowser. Both are rebadged Firefox and neither one is downloaded from or phones home to Mozilla. I don't suggest that this would make much difference to the numbers.
57
tonmoy 19 hours ago 1 reply      
Firefox installation numbers may be declining, but how do they compare with installs of any browser? Maybe desktop growth has stagnated; maybe with always-updating OSes and Firefox itself, people just don't need to "install" Firefox anymore?
58
zimbatm 6 hours ago 0 replies      
One thing the author didn't touch upon is the amount of manpower available on both sides. I am under the impression that Google has far fewer people involved in the construction of their browsers.
59
fimdomeio 20 hours ago 2 replies      
I want to use Firefox, I really do. But I can tell when it's running and when it's not by my MacBook fan noise. And yes, I've tried all kinds of clean-ups, but it just sits there in the background consuming 40% of a CPU while doing nothing.
60
kevin_thibedeau 10 hours ago 0 replies      
It couldn't possibly have anything to do with breaking extensions once again.
61
norea-armozel 19 hours ago 0 replies      
I think half of Firefox's problem is marketing. Most folks today just trust Google, and so Chrome has a trustworthiness that stands out for folks, especially on the matter of speed/reliability. If Mozilla wants to do anything to save their project, then they have to start re/building their brand recognition and trustworthiness among COMMON USERS (technical users tend to inform themselves, so it's really not an issue IMO beyond actually talking to us). It'll be an uphill battle all the way, but I think they'll find it's worth it.
62
satysin 20 hours ago 0 replies      
I can only speak for myself but I didn't leave Firefox for Chrome because of advertising. I left because a year ago Firefox was painful to use. Sync was (might still be?) incomplete, setting up quick searches was annoying, font rendering was poor, HiDPI support was crap, overall performance was noticeably slower than Chrome and they announced killing off advanced XPCOM based extensions so I figured I would just change over now rather than later.
63
maxharris 19 hours ago 0 replies      
I don't use Firefox because it's a power hog compared to Safari.
64
hendersoon 17 hours ago 1 reply      
I had used Firefox since it was called Phoenix in 2002. Fifteen years. None of my friends or acquaintances used Firefox. I was the last man standing.

I switched to Vivaldi last month due to WebExtensions breaking fully functional mouse gestures in the FireGestures add-on. They finally forced me away. Thankfully Vivaldi exists!

65
Karunamon 21 hours ago 0 replies      
I really don't think the author backed up their hypothesis here. I'd place a lot more of that blame on Mozilla's poor decision-making (detailed elsewhere in this thread) than any amount of google.com popups.

If I were to boil it all down, (and I say this with zero snark), I'd say that they have little to no differentiation with Chrome. It looks like Chrome, it will soon be no more powerful than Chrome, it's developed ignoring community input like Chrome, and the kiss of death: it performs worse than Chrome.

With all that in mind, why not just use Chrome like those popups suggest I should, and get a speed boost while I'm at it? (Note: open source politics do not factor into this)

66
smegel 10 hours ago 0 replies      
Prevent JavaScript from running HTML5 videos and I will switch in a heartbeat.

But I guess Mozilla is just as corrupt as Google...

67
ue_ 22 hours ago 8 replies      
I've seen people frequently say that they don't use Firefox because Chrome is faster, and despite being a Firefox user myself, it's close to what I've noticed. In Chrome (on GNU/Linux and mobile at least), pages seem to load instantly. I don't know why that is, but apparently it's not just me who has noticed this. Meanwhile, the most frequent complaint about Chrome is RAM usage, and only when using many tabs. Most people don't use many tabs.

It's a shame that Chrome, which appears to be on track to become the most popular browser by a considerable margin, is proprietary software. And before I get a reply telling me that Chromium exists, I know that - but I also know that it's not Chromium that's popular.

I think it is also a shame for two more reasons. First, Mozilla wants to make Firefox look like Chrome, probably to replicate features which seem to draw users in, by changing the extensions API to make it less powerful, by supporting standardised DRM in the browser (though this is a different issue), etc. Secondly, we may see a world in which only WebKit matters, and standards no longer rule, similar to the situation with Internet Explorer years ago. This will also put pressure on Mozilla and other "third party" browser authors to support features just because WebKit supports them, or even to break standard features so that they render like they do in WebKit.

I'd probably get shouted at for thinking it would become a "monopoly", but that's exactly what it is, just not in the legal sense.

68
oconnor663 22 hours ago 1 reply      
> monopoly position in Internet services such as Google Mail, Google Calendar and YouTube

Seriously?

69
cocktailpeanuts 20 hours ago 0 replies      
I have both Chrome and Firefox installed but try very hard to stay away from using FF unless I'm testing cross-platform stuff or I want to sign into multiple accounts of the same service (one on Chrome and one on Firefox).

And this has nothing to do with monopoly. That's just a rationalization for their fuckup. I don't even know where to start, let me just list a couple:

1. The "Yahoo.com" by default is the worst: I know users can switch to google, etc. but if a developer like me doesn't even want to go through trouble, why would any ordinary person go through all the trouble when they can just use chrome? And we all know Yahoo doesn't provide customer-centric search results but ad-optimized results to squeeze out revenue.

2. Bad performance: YES IT IS ALL ABOUT ENGINEERING. As someone who keeps a lot of tabs open I can't use firefox because the cpu level reaches the stratosphere if i keep opening tabs and leave them around. The firefox browser performance sucks. Period.

But I think the main reason FF is failing is that the developers are out of touch with reality, just like in this article where one of the developers complains it's because Google is pushing Chrome through monopoly. He's forgetting that before Chrome, it was Firefox that won despite MS pushing IE through monopoly.

If the developers were more self-aware, they wouldn't have let all this happen.

13
How a Reddit forum has become a lifeline to opioid addicts in the US theguardian.com
268 points by urahara  23 hours ago   347 comments top 20
1
gooseus 22 hours ago 26 replies      
What I find fascinating and disappointing is how little the opioid crisis is being talked about or addressed in proportion to other societal issues.

Billions of dollars of private research being poured into self-driving cars by our greatest minds and millions of dollars in lobbying against gun laws all in the name of preventing unnecessary human deaths... yet according to Ben Bernanke (and his references), opioid overdose killed more people in 2015 than automobile accidents and firearms related crimes combined [1].

I'm curious whether the disproportionate concern has more to do with the perception of drug addicts as weak and deserving of their fate, or because they're not a group that can be profited from politically or commercially, or whether overdosing is just not as easy to solve as gun crime or automobile deaths.

Personally, I find all those excuses to be sad and bullshit so I'm hoping it's something else entirely.

[1] https://www.brookings.edu/wp-content/uploads/2017/06/es_2017...

2
VonGuard 10 hours ago 2 replies      
I am almost ashamed to admit I have read these forums for years as a sort of exercise in voyeuristic schadenfreude. I just love reading about drug culture, watching drug movies, etc. Just like I love gangster movies. I can't explain it. A few things I have learned:

/r/glassine is probably the most interesting. They rate heroin bags in the Pittsburgh area. It's supposed to be for everywhere, but it's mostly Pennsylvania.

/r/opiates is a place where addicts confide in each other and practice harm reduction. They do not source. They tell hilarious, sad, amazing stories. Lots of personal confessions and "whole life of an addict" style narratives. Overall, a good community for addicts to find a safe space, instead of a place that hounds them for not being in recovery.

/r/stims is where the meth heads hang out. Occasionally you get these great "I'm on meth and here's everything in my mind right now" text barfs, but mostly I feel like this is an empty sub

/r/researchchemicals is where people discuss Shulgin chemicals and beyond.

/r/drugnerds is amazing. Papers. Lots of them.

/r/drugporn is where people post photos of their drugs, and then a week later the picture is taken down because the Missoula Police saw it and arrested them through the GPS data embedded in the photo.

/r/noids is where people discuss synthetic cannabinoids, which are horrible. Never use these.

The rabbit hole goes very, very deep. Check /r/DarkNetMarkets/

3
averagewall 11 hours ago 3 replies      
In the HN-popular utopian future where automation has made most people unemployed and living on UBI, we're supposed to be able to pursue our dreams without the pressure to feed ourselves. But in reality, many people don't have dreams or the motivation to pursue them and end up as drug addicts/alcoholics/gangsters which is just easier.

I used to laugh at the idea that people need jobs to feel fulfilled. I thought those must be quite helpless people who can't even make their own hobbies. But from personal experience, I found that dreams are for young people and people with demanding jobs. They're a grass-is-greener fantasy when they're out of reach but deteriorate into boring, unrewarding work when you actually do them. Working for a company is especially fulfilling because you're more productive than you can be on your own. You feel more useful. You feel important and needed.

So I think the idea of widespread happy unemployment isn't going to work. It might still happen but I think it'll be a tragedy, not a paradise unless we can find something else that takes the place of work.

4
mherdeg 22 hours ago 2 replies      
Not covered in this Guardian article: the somewhat scarier /r/opiaterollcall (recently banned) and /r/cripplingalcoholism (not scary, just a discussion forum).

I have been consuming reddit via the https://www.reddit.com/r/all/gilded/ feed and it is just WILD what kind of weird and worrying stuff is going on.

5
Animats 17 hours ago 3 replies      
Around my area, I believe a lot of people use [opioids] out of boredom. There's no jobs, no way to have fun besides video games and riding four-wheelers and motorcycles. There's nowhere to go except a run-down mall over in another county.

Sizable numbers of people get into opiates out of boredom? I thought this was driven by people with chronic pain.

6
unionjack22 19 hours ago 4 replies      
I can't recall a similar degree of concern and push for treatment during the crack/cocaine epidemic of the '80s and '90s or the meth epidemic of the '00s. What is it about the opioid/heroin epidemic that differentiates it from those prior?
7
abrkn 6 hours ago 0 replies      
In other news, Hansa Market, a Dark Web Marketplace, Bans the Sale of Fentanyl[1]

[1] https://www.nytimes.com/2017/07/18/business/dealbook/hansa-m...

8
socrates1998 16 hours ago 0 replies      
I wonder if marijuana legalization is inversely correlated with this issue? As in, have the areas that have legalized marijuana seen a drop in opiates?
9
RealityNow 7 hours ago 1 reply      
What is the solution to this opioid epidemic?

As an outsider not well-versed in this topic, my guess is that the root cause here is hopelessness, struggle, and boredom caused by poverty and unemployment.

The solution then would be to employ people and give them a sense of purpose, or at least get them out of poverty.

I'm a huge proponent of a universal basic income (UBI), though I'm not sure that a UBI would fix this problem. Thus it seems as if some sort of government jobs program may be necessary. Giving people meaningful well-paying jobs in science and technology would do wonders in getting people off these ridiculous addictions.

10
naiveattack 8 hours ago 0 replies      
TED: Everything you think you know about addiction is wrong

https://www.ted.com/talks/johann_hari_everything_you_think_y...

11
ryfm 16 hours ago 1 reply      
I stopped smoking thanks to r/stopsmoking. 500 days and going strong.
12
nickeleres 11 hours ago 0 replies      
I just scoured that sub for 30 minutes and all I saw was people bragging about their pills, showing off their heroin, and fantasizing about using fentanyl...
13
OscarTheGrinch 22 hours ago 1 reply      
r/stopdrinking is also a very supportive community.
14
corndoge 22 hours ago 1 reply      
There are tons more of these forums and Reddit is probably the least trafficked out of all those I know about.
15
rhcom2 21 hours ago 0 replies      
/r/darknetmarkets was a really interesting place after the Alphabay shutdown too. A lot of opioid addicts very scared about their supply and withdrawal symptoms.
16
fapjacks 22 hours ago 2 replies      
Kratom saves lives.
17
virtuexru 22 hours ago 0 replies      
The fact that fentanyl (which is 100x more powerful than morphine) is so readily available/mixed with common drugs across the United States is excruciatingly horrifying.
18
unabridged 21 hours ago 5 replies      
>The obvious counter-argument is silencing them strengthens their argument and makes them a martyr. I don't buy that argument at all, it's far more dangerous to allow them to indoctrinate and appeal to all the fringe disenfranchised youth which they've become frighteningly effective at. In any case it's clear reddit can be used for good like in this article, I'm just not so sure it's a net positive to society as another poster argued.

This is the road to censorship, burning books, and confiscating servers. Some people are fine with censorship, because they imagine people who think like them as the censors.

You can only fight ideas with better ideas. Kicking them out and silencing them says to them that you can't compete, that you can't offer them an argument as to why they should tolerate foreigners, other races, etc. It seems obvious to you now, but the ideals of tolerance and equality took years and years of discourse to dominate public consciousness.

19
icpmacdo 22 hours ago 7 replies      
I think Reddit is interestingly a net social positive. For all the bad things that come out of it you can often see users helping others in a pretty significant way.
20
tigershark 21 hours ago 4 replies      
If you ask me, I'd rather spend money financing self-driving cars to save a child from being killed by a car than waste money trying to save people who don't want to be saved and who, even after being revived tens of times, continue to abuse drugs until their death, as per the other thread last week.
14
Show HN: Cinc GitHub for recipes cinc.kitchen
350 points by keithasaurus  1 day ago   151 comments top 59
1
bruce_one 1 day ago 3 replies      
I'm a huge fan of the fact this does weight conversions (cooking by weight is now my favourite, and I've been looking for a site that is just weights, so this looks like a win to me :-) ).

One note I'd make, though, is that US, UK and Australian cups are all different sizes. (and possibly others?)

And from the one recipe I saw, it looks like you're using UK cups? But possibly not, as well... Either way, it might be worth calling that out, or allowing for cup-type specification similar to how you do for weights? (A quick sketch of the differences follows at the end of this comment.)

Somewhat related, some sites call out egg weights too, and that might be something to try to do here as well? (Because, if nothing else, egg weights are often written on the carton, which acts as a guide even if people don't actually weigh them.)

Another thought, again... Some kind of showcasing? A la Github's explore, or even "awesome" pages? I might be off, but to me discovery is really important for recipes and I love browsing "cookies" or "desserts in a hurry" or similar; and pages akin to that with community curation could be nice?

Hmm, another random idea... Some kind of more granular forking to facilitate things like "do you have a stand mixer?" and that kind of distinction? (Coupled with "equipment switches" maybe?) In the past I've done recipes that required something I didn't have, and I've had to tweak fairly aggressively to make it work, but when/if it did work, maybe adding that feedback into the recipe would be valuable? e.g. user ticks "don't have a dehydrator", and recipe tweaks to "use your oven and set it low", etc.

Anyway... It looks awesome! Keep doing what you're doing, and I'll use it :-) Just some random thoughts that came to mind :-)
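
As for the cup sizes mentioned above, here is a rough sketch of the spread, in Python. The volumes are the standard definitions as best I know them, and the helper function is purely illustrative:

    # A rough sketch of why "1 cup" is ambiguous: approximate volumes in
    # millilitres for several common cup definitions.
    CUP_ML = {
        "us_customary": 236.588,
        "us_legal": 240.0,      # the cup used on US nutrition labels
        "metric": 250.0,        # Australia, and most modern UK recipes
        "imperial": 284.131,    # half an imperial pint (traditional UK)
    }

    def grams_of_water(cup_type: str, cups: float = 1.0) -> float:
        """Water is ~1 g/ml, so this shows the spread directly."""
        return cups * CUP_ML[cup_type]

    # The gap between a traditional UK cup and a US cup is ~48 g of water:
    print(round(grams_of_water("imperial") - grams_of_water("us_customary")))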

2
NickBusey 1 day ago 5 replies      
This looks promising and fairly well done, but it's lacking a few critical features IMO. The first obvious one is a way to diff a recipe and its forks (unless I missed that).

The harder and maybe more important one is an issue that GitHub itself still hasn't figured out how to solve either. There should be a way for a fork of a recipe to explain why it is better than the original, and for forks to be voted on, so that the 'best' fork as voted by the most users becomes the canonical 'Chicken Noodle Soup' recipe, or at least so the forks are displayed on the main recipe ranked by popularity.

3
xyclos 1 day ago 1 reply      
Should have the fork icon be an actual fork rather than the github icon.
4
Jaepa 1 day ago 3 replies      
The idea for this has been around for a while. http://forkthecookbook.com/ goes back to 2012.

And it's an interesting idea. Recipes can't be copyrighted, and recipes are generally derivative. The data is generally well structured and fairly standardized (ingredients, equipment, instructions, photo, with optional fields for prep & cook time, servings, notes, difficulty, etc). But there are two primary issues:

1. Recipes don't really have a single inheritance. For example, I cook a lot & really enjoy cooking, but when I'm trying to make something new, I won't follow a recipe. I will read a bunch of recipes, try to understand the underlying ideas & steal the ones I think are interesting, then implement my own. So say I look up Tofu Matar, find 3 recipes, then make my own; if I want to contribute, which recipe do I fork?

2. This may have been dealt with by the use of stars. When I was using forkthecookbook, there was no way of "bookmarking" recipes, so users forked them, which led to the results having a huge number of identical forks. That being said, it seems like the star system may resolve that. But currently it looks as though unmodified copies of recipes still appear in the Forks list, which makes it harder to find beneficial changes. Also, it would be nice to have a history section with a message summarizing what changed.

All that said this does look nice.

Additionally, a nice feature to have would be to "import" a recipe, though since phrasing of a recipe is protected this gets a little bit legally complicated.

5
zdrummond 1 day ago 4 replies      
Nice!

It is clear you put a lot of care into this, and I am sure there is a bucket full of features you want to get to, but I have a big request.

What I really want is one step beyond a place to store recipes. I want a meal planning site! I want to create a pool of recipes that we like, and plug in how often we will eat at home this week. Then out comes a grocery list and a plan for each day. Maybe it even sees what we have liked, and suggests new recipes to add to the pool.

I am this close to pulling the trigger on PlateJoy, but my biggest hurdle is I can't add recipes I _know_ we like to their list of experimental (to us) meals.

Really, I don't mind/enjoy cooking, but I never seem to carve out the time to plan a week of meals for a family of four in advance.

6
keithasaurus 1 day ago 2 replies      
If anyone wants to know the stack it's Django/Postgres on the backend, and an Elm SPA frontend. All data goes through an undocumented REST API.
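
A minimal sketch of what one endpoint in such a stack might look like, in Django. The model, field names, and route are hypothetical, since the actual API is undocumented:

    # Hypothetical sketch of a recipe endpoint in a Django/Postgres backend
    # like the one described above. Model, field, and URL names are invented
    # for illustration; the real Cinc API is undocumented.
    from django.db import models
    from django.http import JsonResponse
    from django.urls import path

    class Recipe(models.Model):
        title = models.CharField(max_length=200)
        # A fork points back at the recipe it was derived from.
        forked_from = models.ForeignKey(
            "self", null=True, blank=True, on_delete=models.SET_NULL
        )

    def recipe_detail(request, recipe_id):
        # Serialize a single recipe as JSON for the SPA frontend to consume.
        recipe = Recipe.objects.get(pk=recipe_id)
        return JsonResponse({
            "id": recipe.pk,
            "title": recipe.title,
            "forked_from": recipe.forked_from_id,
        })

    urlpatterns = [path("api/recipes/<int:recipe_id>", recipe_detail)]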
7
justboxing 1 day ago 1 reply      
This looks great! Congrats on shipping!!

I like that you have a scaling feature. => https://www.cinc.kitchen/info/features

I know this may be too much to ask, but if you are taking requests, some basic nutritional info (ex: protein content, avg. calories) might make it even more awesome for those of us tracking daily calories and protein (for athletes etc). I understand things like sodium and fat might vary depending on how much salt or oil the person cooking the recipe uses, so maybe this might not be feasible to implement...

8
roryisok 1 day ago 1 reply      
I love this, signed up straight away. I've also recently discovered cookingforengineers.com and I love their card recipe system. it would be so cool if you could add something similar - I hate the traditional model of recipes, I always have to read and re-read them several times.

I'm not sure how you would go about adding this though; it's quite different from the structure you have already.

9
mch82 1 day ago 0 replies      
Maybe a button to order the ingredients from Amazon or similar?

Edit: And maybe a shopping basket in case someone wants to order ingredients from a few recipes.

Edit (again): And don't forget about letting people order the equipment too.

10
acalderaro 1 day ago 2 replies      
Someone correct me if I'm wrong, but should the "What's the name mean" in the about section be "What does the name mean?" or "Whats the name mean?"

The first is "proper"; the second is colloquial. At least that's what I thought.

11
jordanwallwork 1 day ago 1 reply      
I went for a pretty ambitious test recipe (Heston Blumenthal's Egg in Verjus, Verjus in Egg), unfortunately I'm not able to save it - it's complaining that one of my ingredients is an invalid weight - 3.3 grams Gellan F. If I remove it then it complains about the ingredient before it, so I'm wondering if there's an ingredient limit? It's at about the 40th listed ingredient. It took me ages to input everything so I do hope I'm able to save it!

The recipe entry experience was great though, some small details that I think would improve things even further:

- Esc should clear the 'text entry' modal, I kept clicking this by accident when wanting to add a new section heading and it was a nuisance having to click the 'close' link

- '+ Ingredient section' should replace last ingredient row if blank

- Would be nice to have section headings (similar to 'ingredient sections') for recipe methods, to break up recipes with multiple discrete sections

- Hard to find errors in a long recipe. They could be more prominent, or add a 'jump to next error' button?

12
AAAton 1 day ago 6 replies      
neat idea!

A weird piece of feedback: Something about the UI gives me a substantial feeling of loneliness.

13
trwhite 1 day ago 0 replies      
Nice idea. You should get schema.org JSON-LD markup (http://schema.org/) on this so the recipes can be crawled properly.
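
For reference, a minimal sketch of schema.org Recipe markup as JSON-LD. The property names come from the schema.org vocabulary; the recipe content is made up:

    <!-- Minimal schema.org Recipe markup as JSON-LD. Property names are
         from schema.org; the recipe content is invented for illustration. -->
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Recipe",
      "name": "Chicken Noodle Soup",
      "recipeYield": "4 servings",
      "recipeIngredient": ["500 grams chicken thighs", "200 grams egg noodles"],
      "recipeInstructions": [
        {"@type": "HowToStep", "text": "Simmer the chicken in stock for 30 minutes."}
      ]
    }
    </script>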
14
gyrocode 1 day ago 1 reply      
Interesting idea... Phrase "fork a recipe" just got a new meaning.
15
omnimus 1 day ago 0 replies      
Is it an open-source project? Are you looking for contributions? I am a passionate cook / designer / frontend dev. I struggle with recipe sites and with where to save mine. If this were somehow libre and had a future, I might want to contribute.
16
fanpuns 1 day ago 0 replies      
Nice project, I like the format of entering new recipes. I think you will get good uptake even from users who don't know what GH is :)

Is this project open source, or does it plan to be at some point? I've looked at some of the other projects out there that are similar, but many seem to die because the founder runs out of steam or gets busy with other stuff. I would really be interested in contributing to this or a similar project if anyone has a suggestion for one that is open.

17
DerfNet 1 day ago 1 reply      
This is a great idea. How many times have you looked at reviews on recipes.com or whatever and the first 10 include a half dozen substitutions, basically making a different end product entirely? Each of those reviews could instead be a fork. Awesome!
18
LostCharacter 1 day ago 1 reply      
Nice site! I look forward to using it in the future. One thing - when using LastPass to generate a password, it fails to fill the first password field and only fills the "repeat" portion. Likewise, it fails to fill the username field for login.
19
azeirah 1 day ago 0 replies      
Please pay someone to add a few thousand recipes, and please keep working on this for a few years. This can be huge.
20
overcast 1 day ago 1 reply      
Looks like a more complex version of what I had created with imadefood over a year ago. Similar features, with the branching, etc. I've been working on an iteration in a slightly different direction, as it didn't pick up any steam. Good luck :)

As others have stated below, there are forkthecookbook, forkingrecipes, and also recipelabs; all do basically the same thing. I just don't think there is enough market for it. Certainly was a fun little project though!

https://news.ycombinator.com/item?id=10853665

21
kwhitefoot 7 hours ago 0 replies      
Minus marks for blank window if JS is disabled.
22
melicerte 1 day ago 0 replies      
Nice idea. Just that code is a universal language; English is not. Is there any way to handle multiple languages for the same recipe outside of forking? Forking is one way to address this issue, but the fork would have no value other than translating the recipe.

Just asking

23
joepour 1 day ago 0 replies      
This is really cool, congratulations on shipping!

Have you thought about simplifying the UI by changing "Forks" to something like "Twists" or Takes (as in 'my twist on' or 'my take on').

We all understand what a fork is, but the average user will likely get confused, and not just because a fork is a kitchen utensil!

24
bruth 1 day ago 1 reply      
Wow! My friend and I had this idea years ago... possibly pre-GitHub. I am glad someone finally made it! Well done. My particular interest was seeing how a recipe deviates from the original... a recipe graph of sorts. That would be a fun way to visually explore and find related recipes that are similar in ingredients.
25
whatnotests 1 day ago 1 reply      
AMAZING idea.

Please allow me to G+ connect or facebook connect, b/c I don't want to have yet another password to remember, and I'd like to (maybe?) share some activity on Cinc with my FB peeps.

Just a thought.

26
jstoja 1 day ago 0 replies      
I really like it. I often look for a recipe with many likes online and have to read dozens of comments to adjust it...

For example, a cake recipe where nearly all commenters advise using 1/4 of the sugar called for. If you don't read the comments, chances are that even with a recipe approved by many people, it tastes like shit.

I really hope this will grow and succeed!

27
taherchhabra 1 day ago 1 reply      
Is there a way to compare how it actually tastes? These days I am baking cakes by watching recipes on YouTube; the texture always comes out correct and I use the exact weights as described, but the taste is somewhat lacking. It would be good if we could give a reference to a local cake shop for its similarity to the recipe.
28
x0 1 day ago 1 reply      
Is there a way to do pull requests? I'd like to go through and convert a few people's F to C.
29
midgetjones 1 day ago 0 replies      
I think this is a brilliant idea, but I wonder if the terminology should be changed? I think the concept of saving a copy of a recipe, then editing it would come much more naturally to the 99% of people who have never heard of github.
30
INTPenis 1 day ago 0 replies      
But it's not really like GitHub for recipes until you solve the URL interface.

I should be able to go to a user and their recipes with the same ease as on GitHub. Having unique IDs for recipes exposed in the URL isn't really necessary.

31
enobrev 1 day ago 0 replies      
I've been wanting to try this for a while. This looks great! Nice to see some excellent suggestions in this thread as well, that I definitely never thought of. I hope this is successful, as I'm a huge fan of the idea.
32
rkuykendall-com 1 day ago 0 replies      
Love it!

Would be most useful to me with calorie support. I see some users are hacking it by adding it to the title or in the notes. That should be a strong hint.

33
jerrysievert 1 day ago 1 reply      
Might I ask that recipes be presented in h-recipe format? (http://microformats.org/wiki/h-recipe) It's simple to do, and works really well.
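
For anyone unfamiliar, a minimal sketch of what h-recipe markup looks like. The class names follow the microformats spec; the recipe content is made up:

    <!-- Minimal h-recipe markup; class names follow microformats.org,
         the recipe content is invented for illustration. -->
    <article class="h-recipe">
      <h1 class="p-name">Chicken Noodle Soup</h1>
      <ul>
        <li class="p-ingredient">500 grams chicken thighs</li>
        <li class="p-ingredient">200 grams egg noodles</li>
      </ul>
      <data class="p-yield" value="4">Serves 4</data>
      <div class="e-instructions">Simmer the chicken in stock for 30 minutes.</div>
    </article>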
34
joshumax 1 day ago 0 replies      
I started a teensy bit of work on something sorta like this a while ago: https://github.com/joshumax/git-cooking

Glad to see somebody actually brought a similar idea to fruition :)

35
yousifa 20 hours ago 0 replies      
Would like to PM you. Can you please put your contact info in your profile, or email me (address in my profile)?
36
TekMol 1 day ago 0 replies      
Interesting project, well executed!

Are you storing the recipes in Git repos or in the Postgres DB?

If in Postgres, what is the format? Do you put each recipe in a single JSON field? If not, what does the data structure look like?

37
qrv3w 1 day ago 0 replies      
This is cool!

A couple weeks ago I was looking for a way to find similar recipes (forked recipes, in a way) and I ended up making my homebaked solution. [1]

[1] https://timetomakefood.com/find

38
amadeusw 1 day ago 1 reply      
Looks great! I'm inclined to host my recipes there.

But before I do that, what is the future for this site?

* Do you monetize by getting a cut from the shopping cart?

* Will I be able to easily download my data in the future, like I do with git repositories?

39
52-6F-62 1 day ago 2 replies      
This is a great idea!

My one criticism right off the bat is the name --the pronunciation isn't immediately obvious. ("Sink?", "Kink?", "Kins?", "Since?")

Then again, I don't know if that's just me...

40
julee04 1 day ago 1 reply      
This is a crazy fast site! Can you share what you are using to host and serve it?
41
linopolus 1 day ago 0 replies      
Another site totally unusable without JS enabled, where JS could have been used just to add dynamics to otherwise nicely generated HTML...
42
macygray 1 day ago 0 replies      
I have a feature request: add diffs and the ability to see who has starred and who has forked your recipe.
43
amelius 1 day ago 2 replies      
> Chicken Alfredo Pasta with Sweet Potato Noodles (480 Calories per Serving)

You probably mean 480 kilo calories.

44
zitterbewegung 1 day ago 1 reply      
I'm working on a project, and having an API for recipes would be great. I see there is a mention in your ToS, but I don't see where to access it?
45
kbanman 1 day ago 0 replies      
Very well done!

I had started on a similar idea a while back, but never got around to it. Even have a cute domain for it (pifork.com) in case you're interested :)

46
Toast_ 1 day ago 1 reply      
Looks good. I think you should also consider adding the "keto" diet on there as well. Maybe meals with < 10 net grams of carbs?
47
damerms 1 day ago 1 reply      
48
thearn4 1 day ago 1 reply      
An analog to Travis CI for this would be interesting and delicious.

An analog to Docker images would be something like Blue Apron or Hellofresh I guess.

49
rcpt 1 day ago 2 replies      
"guthub"
50
williamle8300 23 hours ago 1 reply      
Are sign-ups disabled? Not getting my confirmation email
51
tylerdurrett 1 day ago 0 replies      
Next step: npm cook spaghetti-and-meatballs

In all seriousness though, definitely looking forward to a public API. Great work!

52
dgfgfdagasdfgfa 1 day ago 0 replies      
Looks good!

The mix of sans-serif and serif is a little weird.

53
joombaga 1 day ago 1 reply      
How are you doing volume to weight conversions? Do you have a big table of weights-by-volume for different ingredients?
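
Presumably something like a per-ingredient density table would do it. A rough sketch of that approach in Python - the densities are approximate, and this is a guess at the technique, not how Cinc actually does it:

    # Hypothetical sketch of volume-to-weight conversion via a per-ingredient
    # density table (grams per millilitre). Densities are approximate; this
    # is a guess at the approach, not Cinc's actual implementation.
    DENSITY_G_PER_ML = {
        "water": 1.00,
        "all-purpose flour": 0.53,   # roughly 125 g per US cup
        "granulated sugar": 0.85,    # roughly 200 g per US cup
    }

    US_CUP_ML = 236.588  # US customary cup

    def cups_to_grams(ingredient: str, cups: float) -> float:
        """Convert a volume in US cups to grams using the density table."""
        return cups * US_CUP_ML * DENSITY_G_PER_ML[ingredient]

    print(round(cups_to_grams("all-purpose flour", 2)))  # ~251 g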
54
boromi 1 day ago 1 reply      
Does this not support issues and comments? If not, those would be welcome additions.
55
m3kw9 1 day ago 0 replies      
I was sold on Forking, I'm sure no pun intended
56
livas 1 day ago 0 replies      
This is a pretty cool thing. Just maybe they can change that design. Anyway, I like it.
57
markdown 1 day ago 0 replies      
Viewing source made me sad. One would think that recipes of all things would survive the appification of the web.
58
yellowapple 1 day ago 1 reply      
So I'm trying to submit my cheesy toast recipe as a test. Unfortunately, I can't:

- "1 slice" (of bread) is not a valid quantity

- "to taste" (of black pepper) is not a valid quantity

Oh well. I guess I won't be using this then, at least not yet.

59
k__ 1 day ago 0 replies      
Seeing metric units written out feels kinda odd.

Otherwise very nice idea :)

15
Build a burner phone with Twilio and Kotlin twilio.com
307 points by harel  2 days ago   117 comments top 15
1
kchr 1 day ago 2 replies      
You might want to use the term "single-purpose number" or something similar. "Burner phone" is colloquially used as a term for non-registered/anonymous phones, which this isn't. Sure, it sounds cool, but people might assume that's what they are buying...
2
WisNorCan 1 day ago 3 replies      
The challenge for Twilio is that phone #s are a fixed quantity. The same #s have been circulating across lots of different companies including being used by spammers and in marketing campaigns and published across the Internet. Make sure to search the Internet for any phone # you acquire from Twilio before buying it. It may already have substantial call spam/undesirable organic traffic associated with it.

On the flip side, there are already companies out there specializing in monetizing misdials. They specifically look for phone #s that have been retired recently with a lot of volume, then take those calls and resell them as leads for cars, insurance, etc.

3
TACIXAT 1 day ago 3 replies      
Recently tried to use Twilio as a burner for registrations (a project to set up unique personas online). All modern websites that require a phone number are also doing a CNAM lookup and discriminate against VOIP numbers. I won't be ditching my prepaid phone any time soon in exchange for VOIP services.
4
packetized 1 day ago 0 replies      
I think that the disconnect here is that this isn't a "burner phone" the way many people think of them, but rather some sort of call/SMS firewall.
5
Endy 1 day ago 1 reply      
At the risk of sounding silly, what advantage does this provide over buying a $20 TracFone for cash and activating it without an account on TracFone.com over a public Wi-Fi?

Or, better yet, not bothering to "activate" it, but using it only over Wi-Fi, creating a new Google account, and downloading Talkatone (or the Hangouts Messenger to create a Google Voice account connected to your new account)?

6
blago 1 day ago 1 reply      
I've been doing this for a long time and there is one big, real-world gotcha - most businesses (such as Uber, Lyft, Microsoft) can't send text messages to Twilio numbers. I filed a ticket with their customer support a few years ago and they basically swept it under the rug.
7
vedanta 1 day ago 0 replies      
Is it just me, or wouldn't everyone rather live in a world where privacy is respected by third parties just like a first party would? The do-not-call law has developed too many exceptions, or needs strengthening, methinks. If I'm getting a spam call on a burner number vs. my normal number, have I really saved any time? Or privacy or security, if the number can be reused in malicious ways?

I save numbers for 2nd-factor auth, and I seem to get Yahoo/MSFT messages from the same numbers, also GitHub sometimes. Number reuse is definitely a problem. A PKI cert system for numbers/calls would be great to have in this case. I want to know for sure that I'm getting my 2nd-factor auth code from MSFT, regardless of the number they're using.

8
thedangler 1 day ago 2 replies      
Or download a real burner phone app that lets you buy numbers and associate them with different things. There are plenty on the app store for iOS and Android.

Still looks like a fun project.

9
Jamieee 1 day ago 0 replies      
I've been using ring4 for throw away numbers and it's been decent. Plus there are often lots of promo codes to use.

Code for 20 extra credits if you check it out: INV-DKCSGJHX

10
ozfive 1 day ago 2 replies      
Then you'll pay for all the phone calls from marketers. Thanks but no thanks
11
cupcakestand 1 day ago 3 replies      
Nice idea, confusing title (burner phone??).
12
mkez00 1 day ago 1 reply      
Author is building a Spring MVC application but labels it as a "Kotlin app". Is it because Kotlin is the new hotness and the Spring Framework is "old" and "uncool"?
13
Animats 1 day ago 1 reply      
Er, you realize Twilio logs everything, right? You can log into Twilio and read your own logs.
14
SeanDav 1 day ago 1 reply      
Is this a USA-only solution, or is it also applicable in the UK, the EU, and the rest of the world?
15
burntrelish1273 1 day ago 0 replies      
bur.nr is available for $679.99 :D
16
$4k Renault compared to Tesla Model 3 thedrive.com
254 points by Osiris30  19 hours ago   234 comments top 37
1
smcg 18 hours ago 12 replies      
One of the reasons the Renault Kwid is $4,000 is that, well, it uses cheap materials, and with that comes a very bad safety rating. This is enabled by India's nearly non-existent car safety regulations. Everything has a "price".
2
dalbasal 4 hours ago 0 replies      
I don't get what this article is trying to say.

In India, he said, you cannot look or be strange. Choosing a new entrant is a risk. You have to be careful with your money. A new product must be different.

Isn't this contradicting itself? What I understood from this article is (1) Renault are selling the cheapest possible standard looking car and (2) They do localized, ritualistic sales & delivery which appeals to their buyers.

Seems to me that this is a story about a car that isn't trying to disrupt anything or be any more radical than absolutely necessary to meet its price/cost goals. It is trying to bring standard low cost Renaults to India.

The article mentions the Tata Nano, a more radical low cost concept. It could also have mentioned the Twizy, Renault's current attempt at a tiny lower cost car radically different from other models. Both of these stretch the standard definition of "car" and you might call them disruption attempts. The Kwid is explicitley trying to do the opposite.

The current suite of car types (budget hatchback, family sedan, SUV..) is fairly stable. This is probably because use cases have remained the same, the technology has remained similar & underlying economic factors of production haven't changed much.

Two things could change that: EV technology & AV technology, especially AV.

For example, if most passenger cars are autonomous taxis, we might see more specialized designs. Slow, single seaters for urban travel. Larger comfort vehicles for longer distance travel.. etc. That would be disruptive.

3
krishicks 18 hours ago 8 replies      
Crash test results: 0 stars

https://youtu.be/jePu-6TxypI

Reminds me of the Tata Nano (also 0 stars): https://youtu.be/buMXtGoHHIg

4
thinkloop 17 hours ago 2 replies      
So painful to read this article. How did it get on the front page? Is it that it's funny how confidently the author has no idea what "disrupt" means?

Summary: a new car introduced at the same price as the current leading car in its category, but with a bit more room and slightly nicer, and they give you cake when you pick it up.

Literally, that's the whole article.

5
jefb 18 hours ago 3 replies      
What Is Disruption, Anyway? ... In India, where the average wage remains a fraction of those in the first world, it starts with an affordable car that isn't a complete piece of junk.

5 Paragraphs later:

It would be the same price as the sector-leading Maruti Suzuki, but with more space. It would include a 7" infotainment system with a touchscreen. It would have real ground clearance. It would resemble a smaller version of Renault's wildly popular Duster SUV.

New sexy features don't make something disruptive. Sure, it may pan out to be a popular vehicle, but disruptive? because it has infotainment?... spare us.

6
renaudg 13 hours ago 4 replies      
Anyone else feeling slightly uneasy with the exciting and "disruptive" plan of ushering a population of 1.3 billion into the glorious era of the fossil-fuelled car ?

Seems exactly what the world needs right now.

7
mikepurvis 1 hour ago 0 replies      
Note that the author is Alex Roy of Cannonball fame.

https://www.wired.com/2007/10/ff-cannonballrun/

8
dgudkov 15 hours ago 0 replies      
This article should be viewed not from the point of view of technology disruption, but from that of marketing disruption. The correct analogy would be with Lexus rather than Tesla (apparently the latter was brought in purely for extra clickbait). Lexus offered no technical innovation; however, supply-chain optimizations allowed Toyota to launch a car that offered significantly more value for the money than was common in the market at that time. A flavor of exclusivity together with a non-exclusive price tag made Lexus a huge marketing success. So is the Kwid a marketing disruption? Could be. If it's the right product for the right market - why not? Is the parallel with Tesla relevant? Absolutely not, because Tesla is known first of all as a technological breakthrough. Of course, good marketing & PR played a huge role in Tesla's success, but it's still secondary. All in all, the article is about a successful product/market-fit case. It has nothing to do with technology advances.
9
EngineerBetter 8 hours ago 4 replies      
Kinda off-topic: I own a Renault Zoe, which is an EV hatchback that you can get second-hand for about 4k, and also a Tesla Model X 90D.

People get far too excited by Tesla's brand, when there are really competitive EVs out there.

Feel free to AMA about EVs or how mine compare.

10
justin66 18 hours ago 0 replies      
"How are we going to get anyone to read this article we wrote about a boring, shitty car? I know, gratuitous Tesla comparisons!"
11
touristtam 18 hours ago 1 reply      
I completely disagree. Renault, in buying Dacia, bought a company capable of producing a compact car at a ridiculously low price, on a proven industrial platform (the Clio's), and in a market segment that is the bread and butter of every major European car manufacturer. However, the same car was sold at almost twice the price within the EU single market as it was outside it. Before that, you had the experience of FIAT with a single platform produced in Europe (the Fiat Uno) and South America (the Fiat Palio). It was a commercial success for the company. However, the price of the Palio was lower than the Uno's, as it was targeting developing markets. The more mature markets are not benefiting from the technological development of those cars, as they are based on proven platforms (read: already paid for), nor from the price reduction, as the competition is locked in place and an aggressive pricing policy might actually be detrimental to the product's perception. This is without even mentioning the cartel-like behaviour of European car manufacturers.
12
pavel_lishin 18 hours ago 4 replies      
That ceremony looks monstrously tiring to an introverted westerner like me. I don't want to cut a cake and watch a priest bless my car; I want to walk onto a lot, look over the car I pre-selected online to make sure it's the right one, and then give someone some money and drive away.

I'm also wondering how long that friendly attitude - and the traveling pop-up repair program - will last. Once everyone wants one of these, you no longer have to cajole people.

13
zwieback 18 hours ago 2 replies      
They mention using a single Indian supplier and designing for that supplier as an advantage. Where I work we wouldn't be allowed to do that - we always want at least two different suppliers so we don't put ourselves at risk.

Hope it works out for them - seems like a fun little car.

14
5_minutes 18 hours ago 3 replies      
I was just watching an episode on ViceLand about the Indian (non-existent) sewage system. Millions of people there have no sanitation facilities and defecate anywhere possible. And they are living off a few dollars a month. It's really worthwhile watching, btw.

So talking about "quality cars" for $4000 here seems totally surreal. And that that would be disruptive is even more of a strange observation.

The author sure has an interesting story to tell, but the conclusions, title etc. of the article show a lack of good journalism/storytelling.

15
zubairq 8 hours ago 1 reply      
To say that this is unsafe is incorrect. It would be unsafe in the west compared to other cars, but in India, compared to the other cars and motorbikes that riders have as options, the Renault actually INCREASES safety dramatically
16
StreamBright 18 hours ago 2 replies      
Is a cheap car what we call disruption in 2017, really? We know that cars do not scale - look at LA traffic. It seems like a pretty bad idea to replicate the same problems in the 3rd world.
17
apo 17 hours ago 0 replies      
This doesn't sound like marketplace disruption as described by Christensen.

Under his model, the marketplace disruptor attacks from the bottom. The challenging product is demonstrably, objectively worse in one or more ways than the incumbent.

If introduced into the US, the Kwid would indeed be a marketplace disruptor. But in India, where the article itself points out that relatively few cars are driven, the Kwid is a luxury item.

In this view, neither anything Tesla has offered nor the Kwid should be considered disruptive. They're both vulnerable to a determined, capable incumbent.

18
kn0where 18 hours ago 0 replies      
On the plus side, given India's traffic, maybe this criminally-unsafe car will rarely drive fast enough to be dangerous?
19
rwmj 18 hours ago 7 replies      
This is something I've always wondered: why are even the cheapest cars (in the Western world) so expensive? Isn't it possible to make a new car for under ~$10,000?
20
walrus01 10 hours ago 1 reply      
One thing that is not mentioned in the article at all, other than the existence of scooters, is that a huge number of them are specifically 2-stroke gasoline engines, which are incredibly polluting. It's worth having a discussion about the environmental implications of a growing middle class in South Asia buying cars, but at least if scooters are replaced with cars such as this on a 1:1 basis, the air will be a LOT cleaner. The air in Lahore or New Delhi on a typical day is a blue-gray haze full of two-stroke engine exhaust. The typical two-stroke scooter pollutes more than a giant American gas-guzzling SUV from 20 years ago.
21
vivekd 6 hours ago 1 reply      
So wait, this means that if it weren't for safety and emission standards I'd be able to go out and buy a new car for 4000 dollars and a used car for a tiny fraction of that? I get that others want safety, but for me that's not as important as avoiding debt. Shouldn't I have the right to make that choice on my own?
22
hnnsj 15 hours ago 0 replies      
I confess I skimmed most of the article, but am I being fair if I say that it makes the point that electrification is just a nice-to-have feature for rich western hipsters? And nowhere does it raise the global environmental issues of millions and millions of additional combustion engines? At least there's cake...
23
MBlume 14 hours ago 0 replies      
I think this article would be substantially improved by removal of references to Tesla or "disruption".

The stuff about the Renault is interesting, but all the "what these unthinking Musk fanboys don't realize..." is obnoxious and doesn't add anything to the article.

24
joekrill 4 hours ago 0 replies      
> Ask the clickbait mills and the sheep who retweet them...

Surely the irony must be intentional.

25
gambiting 7 hours ago 0 replies      
All the article is telling me is that the American dealership experience must be incredibly shitty. We bought a brand new VW Polo here in the UK (and the Polo is the cheapest VW you can buy here); on the delivery day it was waiting for us in the dealership wrapped in ribbons, my partner got a massive bouquet of flowers, there were cards saying "happy new car day <partner's name>", and two sales assistants spent a lot of time with us showing us every bit of the car. That sounds pretty similar to the experience described in the article, minus the religious parts.
26
dsego 13 hours ago 0 replies      
Funny, the Yugo cost that much in the USA 30 years ago. Btw, the Yugo was an icon; this is shite.
27
babesh 12 hours ago 1 reply      
Why don't you just limit the speed of the car to how much impact it can take without seriously injuring the passengers and possibly pedestrians? The car can't take a 40mph impact, but maybe it can take a 25mph one?
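For a rough sense of why capping speed matters so much: impact (kinetic) energy grows with the square of speed, so a 40mph crash carries about 2.6x the energy of a 25mph one. A quick back-of-envelope sketch (the mass is a guessed kerb weight for a budget hatchback, not a published spec):

  # Illustrative only: kinetic energy scales with v^2.
  def kinetic_energy_joules(mass_kg, speed_mph):
      mps = speed_mph * 0.44704  # mph -> m/s
      return 0.5 * mass_kg * mps ** 2

  car_mass = 700  # assumed kerb weight in kg
  ratio = kinetic_energy_joules(car_mass, 40) / kinetic_energy_joules(car_mass, 25)
  print(ratio)  # ~2.56, i.e. (40/25)^2 -- the mass cancels out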
28
nicolashahn 18 hours ago 2 replies      
I wonder which will make more money, the cheap car sold by the millions or the (relatively) expensive car sold by the thousands.
29
blunte 10 hours ago 0 replies      
The problem of car safety isn't solved by the Renault driver driving slowly. The risk is still significant when the speeding large truck fails to stop/yield (is that even a concept?) and plows into the car.

Meanwhile, look at all the energy (and human time) being put into selling one car, but the streets and surrounding areas are covered in trash and worse.

I would much rather see these car salespeople doing what "The Ugly Indian" (see Facebook for group) is doing - cleaning up and decorating disgusting areas and teaching people by example how not to trash their home environments.

30
yitchelle 18 hours ago 1 reply      
That delivery ceremony is amazing. Are the salaries so low in India that the car dealers can afford to have such a ceremony for every new car delivery?
31
jayeshsalvi 14 hours ago 0 replies      
Why the comparison with the Model 3? It's not electric. The blogger seems to have only an anti-Tesla agenda.
32
thewhitetulip 8 hours ago 0 replies      
I am still trying to understand how a car with a 0-rating crash test is being compared with one of the safest cars. Am I missing something? Is this a sarcastic post?
33
snambi 18 hours ago 0 replies      
clickbait.
34
5_minutes 17 hours ago 0 replies      
There is surely something to be said here about Renault's responsibility and ethics as a company and a brand.
35
mdekkers 6 hours ago 0 replies      
The only thing a Renault will disrupt is the reliability of your transportation, and your bank account when you go for repairs. As a current Renault owner I have been forced to become an expert in automotive technology.
36
ensiferum 18 hours ago 5 replies      
Why can't they make a car like this for the European market? Not everybody wants to spend 20k+ on a car. Somehow I feel this would probably cost 10k+ in Europe, so what gives?
37
dsfyu404ed 18 hours ago 0 replies      
>In the US, everything about the car ownership experience from research to negotiation to delivery to service has been utterly and depressingly commoditized

Clearly this guy hasn't walked into a Chrysler dealer and asked "How much horsepower can I get with four doors?"

17
LeoCAD A CAD program for creating virtual Lego models github.com
276 points by app4soft  1 day ago   65 comments top 21
1
app4soft 1 day ago 0 replies      
Starting from 2017 the developers changed the version numbering of LeoCAD, so now (after the 0.83.x versions) it looks like YY.MM - 17.07.

This is the second release in 2017, and I must say that LeoCAD has grown up - the "Parts" toolbar has many modes for previewing LDraw parts (starting from 17.02), and models in the main window can now be displayed with shadows and highlights (starting from 17.07) - just go to menu "View -> Preferences... -> Rendering" and switch on "Enable lighting".

If you have any issues, post them in the tracker[0].

LeoCAD has a very simple to understand & use UI, so kids can use it after an hour of learning with the Basic Tutorial[1].

Don't forget to read the docs about the Texture Mapping[2] and Meta Commands[3] tags (they can be stored inside .mpd/.ldr/*.dat files), which give you additional features for customizing your LEGO model.

Fresh 'nightly' builds of LeoCAD for many Linux-based distros are on OBS[4]. On the same OBS you can also find builds for Lpub3D, Lpub, LDglite, LDView and the LDraw library.

Call for Developers: if you know C/C++ & Qt, please help make this program better! Help us solve open issues or propose ideas on how to improve LeoCAD! ;-)

[0] https://github.com/leozide/leocad/issues
[1] http://www.leocad.org/docs/tutorial1.html
[2] http://www.leocad.org/docs/texmap.html
[3] http://www.leocad.org/docs/meta.html
[4] http://download.opensuse.org/repositories/home:/pbartfai/
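For anyone curious what these files actually look like: below is a minimal sketch that writes a one-brick .ldr file LeoCAD should open. It assumes the standard LDraw conventions (line type 0 for comments/meta, line type 1 for part references with a position and 3x3 rotation matrix, part 3001.dat as the classic 2x4 brick, colour code 4 as red) - treat it as illustrative, not authoritative:

  # Write a minimal LDraw model file containing a single brick.
  lines = [
      "0 One red 2x4 brick",  # line type 0: comment/meta
      # line type 1: colour, x y z, 3x3 rotation matrix (identity here), part file
      "1 4 0 0 0 1 0 0 0 1 0 0 0 1 3001.dat",
  ]
  with open("brick.ldr", "w") as f:
      f.write("\n".join(lines) + "\n")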

2
Sknowman 1 day ago 2 replies      
Lego had its own digital designer - I remember playing around with it a really long time ago. Seems it's still around: http://ldd.lego.com/en-us/
3
bhouston 1 day ago 2 replies      
How does this compare to Mecabricks online? https://www.mecabricks.com/

And the new tool from Stud.io from Bricklink? https://studio.bricklink.com/v2/build/studio.page

4
StavrosK 1 day ago 1 reply      
This is fantastic, I remember using Ldraw to create a 3D model from the LEGO blueprint when I was younger. It's great for testing out designs if you don't have the parts, and I remember that the software could print whole step-by-step blueprint books for you.
5
santaclaus 1 day ago 2 replies      
Wow super cool! They should have a hot swappable rendering backend so we can get globally illuminated PBR renders out. Given the limited materials that Lego bricks are made out of, getting sensible default materials and automated lighting would be hella easy.
6
Impossible 1 day ago 2 replies      
If I had side project time I'd really like to make a good VR lego CAD program. I have seen a couple of demos and prototypes and lego has a daydream app, but all of it seems limited and unpolished compared to LDD, LeoCAD etc.
7
yann63 1 day ago 1 reply      
Is there a software to "manage" a library of LEGO parts, which would then tell me what I can build?

And is there a software to easily feed/initiate this library? Maybe with something as simple as taking a photo of spread parts on the ground.

Am I dreaming?

8
erwoe 1 day ago 0 replies      
Does LeoCAD implement flexible items that can be bent, such as the Space Needle stems (https://shop.lego.com/en-US/Seattle-Space-Needle-21003)?
9
w0utert 1 day ago 0 replies      
Very cool, I've actually thought about making something like this using the OpenCASCADE CAD kernel at some point (I'm familiar with it because we use it at work).

Does LeoCAD itself have built-in capabilities to create the Lego bricks, using a CAD kernel? Or are they pre-modeled in an external tool? Typical Lego bricks are simple enough to build using basic Boolean geometric operations, so with just a few simple rules the application could define a vast library of bricks to use for building.

I tried to figure this out by looking at the source code, but couldn't find it, so maybe one of the authors can comment on this? Where is the brick data coming from?

10
yodon 1 day ago 1 reply      
I recall that one of the LeoCAD-style brick based editors had an option to remove all the top and bottom connection features for export to .OBJ or similar, simplifying the shapes to boxes but reducing the poly count enormously.

Was that something LeoCAD could do or would I need to use a different tool for that? (This wasn't a general purpose hidden surface removal, it was just removing all of the parts of the bricks associated with ensuring physical connections)

11
floor_ 1 day ago 2 replies      
I remember reading a talk by a multiplayer Lego game dev about how her team spent most of their time creating algorithms to detect penis-shaped block sets, for censorship reasons.
12
jve 1 day ago 4 replies      
I'm wondering about the legal side of this. Is it OK from LEGO side to have stuff like this and other similar tools?

And what if when you start printing stuff like that?

13
kuon 1 day ago 0 replies      
LDCad[1] is the only one that won't lag with big models. The UI is a bit weird, but it works very well.

I tried LeoCAD, but it was either lagging or crashing with 1000+ pieces.

[1] http://www.melkert.net/LDCad

14
CharlesDodgson 1 day ago 1 reply      
I wonder is it possible to create something here and drop it in an AR environment.
15
Raphmedia 23 hours ago 1 reply      
What is the output? Could you build 3D models with "virtual legos" and 3D print them easily?
16
King-Aaron 1 day ago 1 reply      
Well, that's my productivity done for the rest of the day
17
pbhjpbhj 17 hours ago 0 replies      
FWIW version 0.83.1 (October 2016) is available in Ubuntu repos.
18
mch82 1 day ago 1 reply      
Cool concept! Sent over a couple pull requests for the docs intended to be helpful, but okay to ignore.
19
boobsbr 1 day ago 0 replies      
Wow, that brings some memories. I spent quite a while playing with it in the early 2000's.
20
reitanqild 1 day ago 1 reply      
For some reason github shows me a 404 when I try to see the full README.
21
Hydraulix989 1 day ago 1 reply      
The one I've been familiar with (since the early 2000s, really) is LDraw:

http://www.ldraw.org/

These guys take Lego SERIOUSLY.

18
Transmit 5 panic.com
296 points by lorenz_li  1 day ago   140 comments top 34
1
rvanmil 1 day ago 1 reply      
Fantastic app, instabuy. I'm really happy to see these kinds of native Mac apps being successful for so long. They're a breath of fresh air amidst all the Electron crap lately.
2
drcongo 1 day ago 1 reply      
I'm buying this just for it not being a monthly subscription.
3
Osmium 1 day ago 0 replies      
Congratulations Panic :)

One of my favorite Mac app companies (along with The Omni Group and, more recently, Affinity). I always know I'll be paying for quality, polished software with Panic, and I've been looking forward to this Transmit update.

4
sergiotapia 1 day ago 1 reply      
Their entire interface and UX persona reminds me of better times when skeuomorphic design reigned supreme. Now all we get is boring flat with single color highlights. Transmit 5 looks fantastic!
5
mikepurvis 1 day ago 6 replies      
Stick around at the top of the page so you don't miss out on the gratuitous rotating 3D truck of awesomeness.
6
blacksmith_tb 1 day ago 3 replies      
Transmit has always been slick, but it seems like Cyberduck[1] might have stolen a fair chunk of their clientele? I find it pretty useful on macOS (and/or things like yafc and ncftp on Linux).

1: https://cyberduck.io/

7
nathancahill 1 day ago 1 reply      
Used it when I switched from Fetch[0] back in the day when PHP code was deployed with FTP. Great client, definitely the most "native" feeling FTP app I've used. Now I mostly use it for S3, which is very well supported.

[0]: Throwback http://vintagemacmuseum.com/wp-content/uploads/2010/05/Fetch...

8
aezell 1 day ago 2 replies      
I haven't used Transmit in a while as I don't have a need for it, but when I did it was a great client.

The Panic app I miss most is Unison. Well, miss in the sense of miss it getting updated. It's still available.

Years ago, Unison and a fat Giganews subscription were fantastic ways to discover music.

9
ivanhoe 1 day ago 1 reply      
Funny how much git has changed how we do things. Transmit was one of the apps that I always had open on my laptop, and now I haven't touched it at all for more than a year.
10
dangayle 1 day ago 0 replies      
What timing! I was just telling my coworker this morning that Transmit was the best money I ever spent on tools I use for web development. I've been using Transmit for a long, long time and I still feel like I haven't fully utilized it.

Instabuy for me.

11
Exuma 1 day ago 0 replies      
I've been a fan all the way since the beginning... I'll buy this even though I don't even use FTP and whatnot much anymore. Just for the extreme value this app gave me many years ago when I was getting started.
12
atYevP 1 day ago 1 reply      
Yev from Backblaze here -> good news! Backblaze B2 as a destination ;)
13
dmix 1 day ago 1 reply      
They also launched a new sync service which automatically encrypts your files clientside: https://panic.com/sync/

It's great to see encryption is becoming standard practice with new services.

14
deanclatworthy 1 day ago 1 reply      
> And yes, Transmit still handles the classics FTP, SFTP, WebDAV, and S3 better than any. We make complex services drag-and-drop simple.

I love Transmit, but S3 has been broken for a long long time on Transmit 4 [1]. Is it now fixed in 5?

[1] https://twitter.com/derscuro/status/525570239120285697?lang=... (There are earlier references to this issue than this).

15
pier25 1 day ago 3 replies      
Finally.

I've been using Transmit for 10 years and had already moved to Forklift since the Transmit 4 engine was so slow.

Transmit 5 looks awesome and I only miss access to Google Cloud Storage which surprisingly only Cyberduck supports.

16
bleomycin 1 day ago 0 replies      
I didn't see any mention of segmented download support via sftp? This is something lftp and smartftp support but very few other clients do.
17
bdcravens 1 day ago 1 reply      
Looks like it's no longer on App Store (not surprising or disappointing, though it was convenient when I moved to a new machine)
18
breadmaster 1 day ago 0 replies      
Always worth the money I've spent for a Panic app. Coda 2 got me through my previous gig as a web dev.
19
danpalmer 1 day ago 1 reply      
I love Transmit, and version 4 served me well, but I've used it less and less over the years to the point where I don't think I'm the target market, as a web developer, anymore. I wish I had a reason to use this, but I can't find one.
20
favorited 1 day ago 0 replies      
Pretty awesome that they've been building the "same" app for 20 years since MacOS 9!
21
gabrielcsapo 1 day ago 0 replies      
The Mac application company - they are the reason I started programming!
22
aidos 1 day ago 0 replies      
When I moved to a Mac (2004/5), Transmit was one of the first bits of software I purchased. It was a bit of a revelation to discover that software could be so lovely - it really added to the joy of using a new machine.

Having said that, nothing was ever as fast as LeechFTP, which I used on Windows [http://www.leechftp.de]. That thing was magic - no FTP client has ever felt so fast.

23
terinjokes 1 day ago 1 reply      
This says it supports "Amazon S3". Does anyone know if they allow you to configure the endpoint, and thus use an S3 compatible store[0]?

[0]: https://en.wikipedia.org/wiki/Amazon_S3#S3_API_and_competing...
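(Whether Transmit exposes this is a question for Panic, but for illustration, this is how an S3-compatible store is usually targeted programmatically: same API and request signing, just a different endpoint URL. The endpoint and credentials below are placeholders, and boto3 is used purely as a familiar example client.)

  import boto3

  # Point a standard S3 client at a hypothetical S3-compatible endpoint.
  s3 = boto3.client(
      "s3",
      endpoint_url="https://objects.example.com",  # placeholder endpoint
      aws_access_key_id="ACCESS_KEY",
      aws_secret_access_key="SECRET_KEY",
  )
  print(s3.list_buckets()["Buckets"])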

24
bdcravens 1 day ago 1 reply      
Glad to see them offer more cloud options (in addition to s3: Google, Dropbox, etc). While perhaps Transmit is "prettier", in recent years there have been many more complete offerings from their competitors.

Looks like they've made some S3 enhancements (which is my primary use). I hope they updated with support for KMS-encrypted files.

25
acomjean 1 day ago 0 replies      
I've used Transmit for over 10 years.. (Yikes). Was just wondering if this was going to get an upgrade.

Same price new/upgrade. It's been 7 years since they upgraded the previous version, so that's fair. I like the "Sync folders" feature quite a bit.

They started making games, and I was wondering if that's where the company was headed..

26
nbrempel 1 day ago 0 replies      
I'm always impressed with Panic's software. I'm sure this won't be any different!
27
rangibaby 1 day ago 0 replies      
I bought Transmit 4 in 2011-ish and was happy with it. The major missing feature (IMO) was segmented downloading. I switched to lftp and never looked back.
28
wooptoo 1 day ago 0 replies      
The remote-remote feature is great! Does any other client have such a thing? I basically need Google drive -> ftp.
29
_Codemonkeyism 1 day ago 0 replies      
We use mostly Forklift here for uploads/downloads, the two pane view is a favorite of mine since Norton Commander.
30
nodesocket 1 day ago 2 replies      
Can I backup my macOS Time Machine to Google Cloud Storage using Transmit? I'd like to store my Time Machine backups offsite just in case.
31
jshelly 1 day ago 1 reply      
Any way to add SMB? I don't see it in the list of options and I need to copy files to a Windows server.
32
overcast 1 day ago 1 reply      
FINALLY. So frustrating having to wait all this time. I couldn't get a definitive answer from them on whether a Transmit 4 license purchased at present would be eligible for a Transmit 5 upgrade.
33
smacktoward 1 day ago 3 replies      
I wish there was a universal file-transfer app like this on Windows & Linux. The best cross-platform solution I know of is FileZilla, and even that (1) only does FTP/SFTP, leaving out S3 and all the other services and (2) features some utterly baffling design decisions (ahem: https://trac.filezilla-project.org/ticket/2914) that make it more or less unusable for serious work.

Sigh.

34
DavideNL 1 day ago 0 replies      
<just kidding...>

What a surprise it's not a monthly subscription!

</kidding>

19
Scientists Reverse Brain Damage in Drowned U.S. Toddler newsweek.com
268 points by Deinos  3 hours ago   103 comments top 16
1
nerdponx 2 hours ago 7 replies      
This is approaching Star Trek levels of medicine. Congratulations to the team who discovered and pulled this off, and of course my heart goes out to the family and their child. Drowning is very serious and very scary.

Edit: somewhat unrelated since this girl fell into an unattended pool, but it's important to know the signs of drowning, which are not what you see in movies: http://www.cbsnews.com/news/how-to-spot-signs-of-a-child-dro...

Edit 2: I get that people have a right to downvote whatever they want, but seriously, did I say something wrong here?

2
davidiach 2 hours ago 1 reply      
> Concluding, the researchers say that to their knowledge, this is the first reported case of gray matter loss and white matter atrophy (types of brain damage) reversal with any therapy and that treatment with oxygen should be considered in similar cases. "Such low-risk medical treatment may have a profound effect on recovery of function in similar patients who are neurologically devastated by drowning."

I always believed that brain damage cannot be reversed. If version 1 means reversing it in toddlers, maybe version 10 will do miracles for many other people. Truly amazing and congratulations to the medical team!

3
madilonts 1 hour ago 3 replies      
Well, this event happens enough that it might be worth studying the benefit of oxygen therapy, but I'd be very careful about the conclusions you draw from this.

Maybe the oxygen had a substantial positive effect, or maybe the child would've recovered on her own. We really don't know, since there are other reports of children who have good neurological outcome despite terrible prognosis [1] [2].

I'm suspicious because of the unusual and/or stereotyped responses in the Medical Gas article and the linked YouTube videos: "doctors said she had 48 hours to live" (doctors don't say things like that) and "this demonstrates that we're inducing 8101 genes!" (ummm, OK...), etc.

Also, be suspicious when something like this hits all the pseudo-news sites simultaneously. It reminds me of the articles that go something like "16 year-old cures cancer...DOCTORS HATE HIM!".

Finally, I'm very happy this little girl has been given a second chance and hope for her continued recovery. However, don't forget that a toddler was left unsupervised and submerged in a pool for 15 minutes. Some people call that an accident; some people call it neglect.

[1] https://www.ncbi.nlm.nih.gov/pubmed?term=3379747

[2] https://www.ncbi.nlm.nih.gov/pubmed?term=10665559

4
amykhar 1 hour ago 3 replies      
What frustrates me is that in the United States, most insurance companies won't pay for hyperbaric oxygen treatment for traumatic brain injuries. My son, 26, was injured in a car accident last November. I would love to be able to get Oxygen therapy for him, but cannot.
5
matt4077 1 hour ago 3 replies      
> was in the 5 degree Celsius water for up to 15 minutes before being discovered.

As my professor used to say: if you're going to drown, drown in almost-freezing freshwater.

6
blauditore 8 minutes ago 0 replies      
First paragraph:

> she spent 15 minutes submerged in a swimming pool

This seems highly implausible, given she survived. Also, how would they know the moment she dropped in?

Further down:

> up to 15 minutes

Ah ok. From what I know, brain damage starts occurring even after 2-3 minutes without air (for adults), so I suppose it was rather on the lower end. Does anybody know a bit more about this?

7
slr555 1 hour ago 1 reply      
Drowning is from a medical standpoint more complex than the simple notion I grew up with which was in essence "water fills your lungs so you can't breathe air".

In fact drowning does not require filling the lungs completely. Even a volume of a few milliliters per kilogram of body weight is enough to cause drowning. Additionally, drowning can cause serious damage to the lungs themselves even if the patient survives initial attempts at resuscitation. The alveoli (the functional units of the lungs) are lined with a surfactant that is critical to the exchange of air to the bloodstream. Water can severely disrupt the surfactant and impair function, not just while the water is present but until the body is able to restore the surfactant layer. Damage to the patient's lungs in this case seems to have been mild enough that the oxygen therapy could do its job.

Also notable is the 5 degree Celsius water temperature (41 degrees Fahrenheit). This water temperature, compared with the temperature of an Olympic practice pool (~76 degrees Fahrenheit), is cool enough (though not as cold as in many other reports) to trigger the so-called "diving reflex", where stimulation of thermo-receptors in the skin triggers a vagal response that shunts blood away from the periphery and to vital organs.

Minimal surfactant damage and the diving reflex (as well as the patient's age) seem likely to some degree to have facilitated successful treatment of the patient.

8
mechnesium 1 hour ago 0 replies      
This is really awesome. I am curious if this therapy would have been augmented by cognitive enhancers or nootropic substances such as piracetam. Piracetam in particular exhibits neuroprotective effects and improves cerebral vascular function. Several studies have found it to improve recovery following acute/transient ischemic stroke. It has actually been prescribed in several countries for this purpose.

References:
https://www.ncbi.nlm.nih.gov/pubmed/22972044
https://www.ncbi.nlm.nih.gov/pubmed/10338105
https://www.ncbi.nlm.nih.gov/pubmed/9412612
https://www.ncbi.nlm.nih.gov/pubmed/9316679

9
rhinoceraptor 26 minutes ago 0 replies      
It would be interesting to know if better results could be obtained using even more oxygen, in combination with a ketogenic diet/exogenous ketones (which would negate the risk of oxygen seizures).
10
samfisher83 1 hour ago 0 replies      
It seems like they fed the body a lot of oxygen and the body healed itself. I think the body is pretty amazing at regeneration when we are young.
11
mabbo 1 hour ago 2 replies      
I was worried this would be a case of neural plasticity, where the brain just rewires itself around the damage (which is a thing, and it's super cool). But then I read this part:

> An MRI scan a month after the 40th HBOT session showed almost complete reversal of the brain damage initially recorded. Researchers believe the oxygen therapy, coupled with Eden having the developing brain of a child, had activated genes that promote cell survival and reduce inflammationallowing the brain to recover.

We can reverse brain damage. Wow.

12
sunwooz 1 hour ago 0 replies      
Is there data out there about infants in a similar situation who didn't receive oxygen therapy? Is it possible that the developing child brain is what almost solely caused the improvements?
13
timcamber 1 hour ago 3 replies      
This is amazing. Does anyone think the cold temperature of the water (5C) had anything to do with the feasibility of recovery? I don't necessarily have a reason to think it would be beneficial or not, just a thought that crossed my mind. I don't think it was mentioned in the article.
14
zeveb 36 minutes ago 0 replies      
Egad the JavaScript on that page is terrible! Every time I scroll to read the first paragraph, it hides the video or something, causing it to scroll away.
15
TurboHaskal 1 hour ago 2 replies      
How is nationality relevant?
16
ilitirit 2 hours ago 6 replies      
Does drowning not imply death? Is there different definition for drowning (or death) in medicine?

EDIT: I'm referring to the fact that the title says the girl drowned, not that she was at some point "drowning".

20
Law enforcement took more stuff from people than burglars did last year (2015) washingtonpost.com
234 points by ryan_j_naughton  2 days ago   38 comments top 7
1
lettergram 2 days ago 2 replies      
Illinois just put the burden of proof on the prosecution (as opposed to requiring the item(s) to prove their innocence)

http://ij.org/press-release/illinois-overwhelmingly-approves...

2
dmux 2 days ago 3 replies      
CT has banned civil forfeiture without a criminal conviction. A step in the right direction.
3
honestoHeminway 2 days ago 1 reply      
The Firefighters

The Firefighters' Guild has been formed and dissolved repeatedly throughout the history of Ankh-Morpork. Usually formed in response to fires which cause significant damage to large parts of the city, the guild is usually dissolved in response to... er, fires which cause significant damage to large parts of the city. The Guild suffers from the undying capitalist spirit of Ankh-Morpork, as those men who are paid per-fire extinguished eventually begin to guarantee a regular supply of fires to be put out. This has led to the frequent destruction of large portions of the city and ultimately to the Guild's being banned.

4
cmurf 2 days ago 1 reply      
We have an AG who wants to take even more stuff from people. Highway men. That's what this country needs.

"We hope to issue this week a new directive on asset forfeiture, especially for drug traffickers." - Sessions

5
dovdovdov 2 days ago 2 replies      
also catching up with the murder rate...
6
lr4444lr 2 days ago 3 replies      
I detest civil forfeiture, but this is clickbait. The burglary figure is only reported theft. Also, the actual metric is value of goods, not number of takings. Obviously, assets in forfeiture are going to be bigger on average several times over than what a thief typically gets when rummaging a house in a 2 minute panic to get out before the authorities are signaled.

EDIT: Also, if you read the article, it buries one of the few redeeming aspects of civil forfeiture: payments to victims. If you buy a car and the police seize it because the VIN is traced to a vehicle previously owned by a drug dealer who used it to ply his contraband but then sold it to pay his way as a fugitive, and once he is apprehended the car is auctioned off and the money is paid to the family of a victim he had earlier killed, that's part of a much bigger legal process. The remedy for the unfortunate person who bought the vehicle unawares in good faith is to go to the dealer and demand a refund, or take him to court for misrepresenting the car; if the paperwork looks good on the dealer's part, he'll have to rely on his loss insurance.

7
tbcj 2 days ago 1 reply      
State and local governments are at least partially to blame for the increase, as a consequence of the decreased budgets given to law enforcement, including district attorneys' offices. That money is often used to make up budget shortfalls, and even then often barely provides enough to competently and fully staff such offices.
21
Apollo An open autonomous driving platform github.com
240 points by KKKKkkkk1  1 day ago   71 comments top 13
1
rwmj 1 day ago 3 replies      
These open source driving platforms are an interesting way to test out the limits of liability disclaimers on software. This license has:

 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

2
Animats 1 day ago 2 replies      
Autonomous driving based purely on machine learning from vision is scary. Machine learning is, after all, a statistical method. It's going to do really great most of the time, and really badly once in a while.
3
siliconc0w 1 day ago 0 replies      
It'd be cool to have some sort of independent 'weissman score' style benchmark for these systems. Maybe just RMSE or similar against ground truth steering/throttle over a battery of different environments/weather/terrain. It looks like it uses LIDAR and Baidu has a pretty impressive AI team so it'd be really interesting to see how they stack up against, say, Comma AI's openpilot or Tesla's autopilot.
4
bradhe 1 day ago 3 replies      
How many more projects are going to get launched with the name "apollo"?
5
the_common_man 1 day ago 0 replies      
This is by the Baidu AI team.
6
rottyguy 1 day ago 3 replies      
I'd imagine this could work on a small fleet of miniature cars (rc size?). A model town/city could be built with various obstacles for training a large amount of the ai, I would think.
7
ingenieroariel 1 day ago 1 reply      
The list of partners looks quite interesting, including Ford, Bosch and Delphi.

It is apparently based on ROS like Autoware.

8
kensoh 1 day ago 0 replies      
Interesting, surprised that this isn't submitted yet. This is the project by Baidu. Thanks OP!
9
nnm 1 day ago 0 replies      
Is it functional as of today? The perception directory is almost empty (only skeleton).
10
alexhornbake 1 day ago 1 reply      
Maybe I'm missing something - is this a platform strictly for developing ML techniques? Or is it intended to actually run in a vehicle... on Ubuntu, in a Docker container?

I'm no expert, but I would think you'd want a realtime OS for this. Right?

11
beachbum8029 1 day ago 2 replies      
Awesome. Now to just get a couple LIDAR cameras...
12
acover 1 day ago 2 replies      
Would this work on an electric bicycle?

Edit: or more stable tri-cycle/quad-cycle

13
vswar 1 day ago 0 replies      
An open autonomous driving platform - seems great.
22
A decentralized Bitcoin exchange github.com
271 points by ColanR  2 days ago   99 comments top 18
1
olegkikin 2 days ago 2 replies      
2
runeks 2 days ago 2 replies      
The fundamental problem is that practically all exchanges are credit-based, and decentralized credit doesn't make sense.

When you want to purchase bitcoins on an exchange, you start out by depositing e.g. dollars on the exchange, thus converting your dollars to that exchange's dollar token (IOU x USD). You then sell your dollar tokens for bitcoin tokens on the exchange, which is super fast because the exchange centrally controls token ownership, and when you want to get your bitcoins out you sell/redeem your BTC tokens for actual BTC. This system, which is how all large exchanges work, is inherently centralized, because it uses a central debitor (which owes USD/BTC to customers) to handle token ownership. This works really well because it puts no limit on performance (as opposed to a system where we use a blockchain to define ownership).

One partial solution is to separate the debitor (funds holder) from the actual exchange: instead of exchanges issuing their own tokens (bitstampBTC, bitfinexUSD, mtggoxBTC, etc.), there would be dedicated issuers (lots of them), and an exchange would constitute a central meeting place, which accepts a specific set of issuers, and to whom people can sign over their tokens in order to engage in a trade. So, for example, a buy order would constitute signing over a token worth the buy order's value to the exchange, which immediately matches it with a sell order, swaps the owner of the matched tokens (after deducting a small fee), and sends the tokens to the buyer and seller.
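A toy sketch of that issuer/exchange split (my reading of the idea above, not any existing system's design): issuers track ownership of their own IOU tokens, while the exchange merely re-assigns owners on a match and deducts a small fee.

  # Illustrative sketch: dedicated issuers hold balances; the exchange
  # is just a meeting place that swaps token ownership on a match.
  class Issuer:
      def __init__(self, asset):
          self.asset = asset
          self.balances = {}  # owner -> amount of this issuer's IOU tokens

      def transfer(self, src, dst, amount):
          assert self.balances.get(src, 0) >= amount, "insufficient tokens"
          self.balances[src] -= amount
          self.balances[dst] = self.balances.get(dst, 0) + amount

  def settle(usd, btc, buyer, seller, usd_amt, btc_amt, fee=0.001):
      # Both sides have already signed their tokens over to "exchange".
      usd.transfer("exchange", seller, usd_amt * (1 - fee))
      btc.transfer("exchange", buyer, btc_amt * (1 - fee))

  usd, btc = Issuer("USD"), Issuer("BTC")
  usd.balances = {"buyer": 1000}
  btc.balances = {"seller": 1.0}
  usd.transfer("buyer", "exchange", 1000)  # buy order: sign over USD tokens
  btc.transfer("seller", "exchange", 1.0)  # sell order: sign over BTC tokens
  settle(usd, btc, "buyer", "seller", 1000, 1.0)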

3
contingencies 2 days ago 0 replies      
Fiat transfer systems are typically unreliable. They can be intercepted, halted, delayed, reversed, and generally cannot be considered objectively predictable, with a wide variety of unique and nontrivial failure modes - not all of which are recoverable - and no objective SLA / service description.

The problem with assuming good faith and using actor reputation (even third-party arbitrated) is that, in becoming a trusted actor, the amount of money available for cut-and-run scenarios increases exponentially (both for arbitrator and actor), until it ultimately makes sense and happens (eg. numerous scam darknet markets, etc.)... often the claim is "sorry we got hacked!"

Using real-world user identities as insurance has the issue that using one's fiat bank account to perform automated or semi-automated trades on behalf of others is probably dubious or outright against terms of service, or at a minimum vaguely arguably so when politically expedient. Therefore, revealing the real-world identity of an accused bad actor (i.e. the fiat account holder) as insurance against bad behavior is likely to expose them to an undue scale of legal hassle and/or asset seizure, which is not something wise to trust a third party with, no matter how trustworthy the arbitrators are supposed to be.

My gut feeling is that such systems work only at small scale, with a veneer of trust that can be established in different ways: deposit is placed with counterparty, reputation within some shared community, mafia boss will murder you if you rip off the system, etc. Between absolute strangers, it is exceptionally difficult to reliably scale, even if you can establish it.

Finally, an important point is that frequent <1BTC transfer activity to random destinations on conventional fiat accounts is likely to trigger bank anti-money-laundering (AML) heuristics.

4
Jaepa 2 days ago 2 replies      
This is interesting, but I could see some possible issues.

It would be fun to ask the Devs some questions.

eg: if peers are able to select their arbitrators, how do you prevent a peer and an arbitrator from gaming the system? There is a secondary arbitrator, but from the docs it looks like after the initial arbitration the funds are released.

Is there a way to protect against a root DHT node hijack? The only reference I see to this is a TODO: see how btc does this.

5
benjaminmbrown 2 days ago 1 reply      
Etherdelta has been doing this for ERC20 tokens for some time: https://etherdelta.github.io/
6
Uptrenda 2 days ago 1 reply      
It may have changed since the last time I used this, but here's how it works:

 1. There are two sides to a trade, Alice and Bob.
 2. Alice has USD and Bob has Bitcoins.
 3. Both sides wish to trade money, but they don't trust each other.
 4. To do this, they deposit collateral in the form of Bitcoins into an escrow account (multiple mediators need to sign to give the collateral back to its owners). This is a bond separate from the money they are already trading.
 5. Alice sends her USD to Bob.
 6. Bob sends his Bitcoins to Alice.
 7. If either side cheated, the mediators won't sign the "check" to release funds from the escrow account. Therefore, so long as the value of the collateral is worth more than the potential profit from scamming -- there is no incentive to scam.
In BitSquare, step 4 is, I think, done with third-party mediators, and the mediators make decisions based on evidence. So first, how do you prove that a user sent Bitcoin? Easy: it's on the blockchain. Second, how do you prove that a user sent USD? Well, I believe BitSquare uses something called "TLS notaries" -- these allow a person to cryptographically prove that an SSL website was in their browser, potentially enabling them to prove that they sent funds.
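The incentive argument in step 7 boils down to a single inequality. A trivial sketch with made-up numbers:

  # A rational trader has no incentive to scam while the bond they
  # would forfeit is worth more than what they could steal.
  def scam_pays(collateral_btc, trade_value_btc):
      return trade_value_btc > collateral_btc

  print(scam_pays(collateral_btc=0.3, trade_value_btc=0.2))  # False: fully bonded
  print(scam_pays(collateral_btc=0.1, trade_value_btc=0.2))  # True: under-collateralized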

As you can see this scheme has a few problems:

 1. Users are required to have Bitcoins for collateral. So if you don't already have Bitcoins, you can't buy any (strange scenario).
 2. It relies on collateral, period, so you can never buy and sell the full amount of funds that you have.
 3. Liquidity is poor. BitSquare could be improved if they had more investors and structured the exchange to provide liquidity themselves at a premium.
 4. It's unclear how secure the notaries are and whether or not they can be cheated.
 5. Reputation isn't that secure and the model doesn't account for attackers, though I think BitSquare solves this with multiple mediators.
Another option to solve the same problem is to use micro-payment channels. A service would have credit that represented a USD balance and micro-amounts of this balance would be sent to the recipient as the sender receives micro-amounts of Bitcoin. This is a better model, IMO, but still can potentially be reversed.

It's good to see that BitSquare is still around though. Decentralized exchanges haven't had much adoption so far and I haven't seen anyone who nailed every usability problem that these exchanges have. Even assets on Ethereum where you can literally write simple code that says "a transfer occurs if two users agree to it" are traded on "decentralized exchanges" with multiple vulnerabilities and bad UX for traders.

7
Animats 2 days ago 2 replies      
The fundamental problem with this is the same problem Ethereum has - the hooks to the real world aren't very good. Making them better implies trusting a third party. If you have to trust a third party, why do you need this?
8
dharma1 2 days ago 1 reply      
Could you have a decentralised p2p fiat <-> ETH exchange with a smart contract and PayPal, where the sold ETH is held by the smart contract until the PayPal transfer from buyer to seller is verified by the smart contract?

Like localbitcoin but no need to meet up
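As a rough sketch, the proposed flow is a small state machine; the hard part - a trustworthy "the PayPal payment actually happened" oracle - is stubbed out below as a callback, and all names are made up for illustration:

  # Illustrative state machine for a fiat <-> ETH escrow; not a real contract.
  class FiatEthEscrow:
      def __init__(self, seller, buyer, eth_amount):
          self.seller, self.buyer = seller, buyer
          self.eth_amount = eth_amount
          self.state = "AWAITING_DEPOSIT"

      def deposit_eth(self):
          # Seller's ETH is now locked by the contract.
          self.state = "AWAITING_FIAT"

      def confirm_fiat(self, verify_fiat_payment):
          # verify_fiat_payment is the hand-waved oracle.
          if self.state == "AWAITING_FIAT" and verify_fiat_payment(self.buyer, self.seller):
              self.state = "RELEASED"  # ETH goes to the buyer
          # else: a dispute/timeout path would go here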

9
mattbeckman 2 days ago 1 reply      
How does this differ from BitShares that's been around a long time? https://bitshares.org/
10
cupcakestand 2 days ago 3 replies      
Ultra-fast trading in milliseconds + real-time updates of all orders are the key requirements for any exchange.

Not sure how a decentralized exchange could manage this.

11
equalunique 2 days ago 1 reply      
Interesting. Previously, the only incarnation of this idea was, to my knowledge, NVO: https://nvo.io/
12
kinnth 2 days ago 0 replies      
I get it. I like it, but I wouldn't use it just yet. I think the fees charged by most exchanges are low enough for me not to worry. Also, there is a large amount of legal bureaucracy now happening around real-money exchanges - how might that affect it?
13
gragas 2 days ago 1 reply      
This is going to get so destroyed by HFT strategies. A million already come to mind.
14
cbeams 1 day ago 0 replies      
Bisq team member here, thanks everyone for the feedback so far. Happy to field further questions. Ask us anything...
15
max_ 2 days ago 1 reply      
A better version already exists https://etherdelta.github.io
16
em3rgent0rdr 2 days ago 0 replies      
I feel there is already a decentralized exchange: OpenBazaar. It has the advantage of a decentralized reputation system and a head start in network effects.
17
brian_herman 2 days ago 1 reply      
Who enforces it?
18
cocktailpeanuts 2 days ago 5 replies      
Just finished watching their video and got nothing other than "Read our white paper if you're interested", plus a bunch of buzzwords.

Why make the video at all? If I were them I would scrap all the bullshit and just spend the two minutes in the video explaining the basics of WHY and HOW it works. People visiting that site already know what a "decentralized bitcoin exchange" is.

23
Open Container Initiative specifications are 1.0 coreos.com
208 points by philips  1 day ago   43 comments top 9
1
cyphar 20 hours ago 0 replies      
Huge props to the other maintainers of the various OCI projects, the contributors to the OCI, and the wider community (special shout-outs to the AppC folks). It's been a very long time coming, but finally we pulled through and have hit this milestone.

I'm very excited for all of the ideas we want to work on now that this milestone is out of the way, and we have a solid base to improve upon. But of course we should have a moment's rest to appreciate how far we've come.

Also, here's the official press release from LF: https://www.opencontainers.org/blog/2017/07/19/oci-v1-0-brin...

2
yoshuaw 20 hours ago 0 replies      
This is really cool - the Open Container bundle format is quite easy to grok; it's a tarball* containing a config.json file for metadata, and a root filesystem. See: https://github.com/opencontainers/runtime-spec/blob/master/b...

I'm really digging the end result of the spec - it mostly defines a bunch of conventions on top of easy formats. Kind of wish the marketing around it would be less buzz wordy, because the end result is quite straight forward.

* (It's usually a tarball; nothing stopping you from sharing images over torrent for example.)
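A quick sketch of that on-disk layout - a directory with a config.json next to a rootfs/. The config below is deliberately stripped down (a real runtime expects more fields, e.g. mounts and capabilities), so treat the exact contents as illustrative rather than spec-complete:

  import json, os

  bundle = "my-bundle"
  os.makedirs(os.path.join(bundle, "rootfs"), exist_ok=True)  # root filesystem goes here

  config = {
      "ociVersion": "1.0.0",
      "root": {"path": "rootfs"},
      "process": {"cwd": "/", "args": ["sh"]},
  }
  with open(os.path.join(bundle, "config.json"), "w") as f:
      json.dump(config, f, indent=2)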

3
chubot 22 hours ago 7 replies      
Does anyone have experience with non-Docker OCI implementations to share? What's an example of something that uses OCI but not Docker? The blog post is a little vague.

I applaud this effort because Docker is sloppy, but I also realize that this isn't usually how technological adoption occurs. Usually the sloppy implementation is standardized, e.g. JavaScript, POSIX, CSV, Markdown, Ruby and Python to some extent -- rather than something designed from scratch or designed with fewer implementation artifacts.

Maybe C++ is an exception, where things are actually standardized and there are multiple, healthy, competing implementations.

4
nwmcsween 2 hours ago 0 replies      
I think a better, but OS-specific, alternative would be to simply have an ELF section in a binary per 'container' technology (mounts, unshare, seccomp, etc.) that is simply called at the earliest .ctor. State would be per binary, and could potentially be per application as well, in say /state/$prog/$app.

The upsides of this are simplicity, permanent 'containerization', better handling of state, automatic updates if all this is handled with a package manager, and finally the potential to have more locked-down containers via analyzing the dep chain.

5
clhodapp 17 hours ago 3 replies      
It is sad how much waste will occur because OCI used Docker's flat layer list instead of appc's dependency graph. This will lead to a tremendous amount of unnecessary network traffic and disk usage globally. I think this is a Betamax vs VHS moment on a much smaller scale.
6
neom 20 hours ago 2 replies      
Been a hot minute since I've sat and thought about cloud. Curious how containers fit into the "serverless" (lol) movement? Also, does something like Invision-straight-to-Lambda pose an existential threat?
7
zzzcpan 21 hours ago 1 reply      
Is there an rfc-like specification document somewhere?
8
agumonkey 19 hours ago 0 replies      
ISO OSI OCI I see a pattern
9
virgil_disgr4ce 22 hours ago 2 replies      
24
AMD has no plans to release PSP code reddit.com
239 points by rnhmjoj  1 day ago   136 comments top 28
1
sonium 1 day ago 2 replies      
For everybody wondering: PSP stands for Platform Security Processor, a secure enclave in the processor and AMD's version of the Intel Management Engine.

Quoting from Libreboot:

As such, it has the ability to hide its own program code, scratch RAM, and any data it may have taken and stored from the lesser-privileged x86 system RAM (kernel encryption keys, login data, browsing history, keystrokes, who knows!). To make matters worse, the PSP theoretically has access to the entire system memory space (AMD either will not or cannot deny this, and it would seem to be required to allow the DRM features to work as intended), which means that it has at minimum MMIO-based access to the network controllers and any other PCI/PCIe peripherals installed on the system.

2
Unklejoe 1 day ago 5 replies      
One thing that kind of bothers me is that I keep seeing people justify this decision by mentioning that AMD may be using third party source code which they do not have the license to release.

My issue with that is that I don't see how it prevents AMD from releasing a super stripped down (essentially disabled), but still closed source, version of the firmware. There may very well be a valid justification for not doing so - I just don't know.

An even better approach would be to allow the system to run without the firmware at all. If I remember correctly, some Intel machines will run for 30 minutes without the firmware, but will shut down after the time elapses. If true, that at least proves that it's not essential for system operation.

In the case of Intel, surely they could have disabled this 30-minute timer on the consumer CPUs (just like how they disable ECC support).

3
sspiff 1 day ago 2 replies      
I think this comment on the reddit thread is a pretty accurate take on the whole thing:

"Not that anyone could have seriously expected that.

If AMD makes a business decision to possibly open source the PSP now or in the near future, the first results will be visible 2-3 years later, at best. However, it is VERY likely that there are legal barriers, such as 3rd party code. Maybe they are using a 3rd party RTOS that they cannot publish the sources of? Or maybe some DRM part of the PSP firmware can't be published?

What I'd personally like to see is a minimal PSP implementation (without any noticeable features) that's Open Source, with reproducible build process and a binary of that signed by AMD."

(Source: https://www.reddit.com/r/Amd/comments/6o2e6t/amd_is_not_open...)

4
JepZ 1 day ago 0 replies      
The sad thing is, we can be sure, that AMD is aware of the topic at the C-level and doesn't seem to act in the interest of the customer anyway. Just remember the Reddit AMA earlier this year: https://www.reddit.com/r/Amd/comments/5x4hxu/we_are_amd_crea...
5
simias 1 day ago 2 replies      
I'm always mildly surprised to find people posturing that having closed source modules in the CPU is completely intolerable while basically all modern hardware is closed source. Even if they release the source code for the PSP they won't release the verilog of the underlying IP to see how it's implemented. And you'd still run a closed source CPU, with a closed source GPU and a closed source USB controller and a closed source PCI controller etc etc...

How many of the people complaining about the lack of PSP source code are willingly running a closed source GPU driver? How many of them are running Windows, Mac OS or some other closed source operating system?

It reminds me of when people were blowing a fuse over the linux kernel's use of Intel's built-in PRNG.

I'm all for open hardware design but focusing on these particular modules is a bit counterproductive I think, if tomorrow AMD or Intel were to release CPUs without this particular "feature" the surface of attack would not shrink massively.

Hardware is open or it's not. Nowadays it's overwhelmingly not open, so you have to blindly trust AMD, Intel and the various ARM SoC vendors not to fuck you over[1], and having the PSP source code wouldn't change much about that.

[1] They probably do.

6
wolfgke 1 day ago 0 replies      
Just for your information, to help you do further research: AMD seems not to use the brand name "AMD PSP" anymore. Instead, some years ago they began to use the name "AMD Secure Processor". Nevertheless it is just the same: read the small footnote at http://www.amd.com/en-gb/innovations/software-technologies/s... which begins with 'AMD Secure Processor (formerly Platform Security Processor or PSP)'.
7
executesorder66 1 day ago 2 replies      
Guess I'll stick with Intel then.

It seems like many in this thread get stuck on the idea that Intel "already has the ME, so what's the point?"

The point is, I prefer Intel due to performance reasons. But I would have changed over to AMD permanently if they open sourced the PSP code, or if they removed the PSP entirely. That would have been their one competitive advantage, and now they've shat all over it, and lost many potential customers.

8
GuiA 1 day ago 1 reply      
Previous discussion here that gives more background about the topic:

https://news.ycombinator.com/item?id=13781408

tl;dr: the PSP is a full computer that lives between the instructions that get sent to your CPU and what your CPU actually executes. It's obviously a huge security problem, because if a backdoor in the PSP gets exploited, you cannot trust your computer in any way, and there is no way for you to verify whether it is being exploited or not. AMD said they might open source it, but even that wouldn't do much to establish trust.

9
floatboth 1 day ago 0 replies      
We don't need the fucking code, we need it to just be OPTIONAL. Just an option to FULLY DISABLE IT. Screw everyone who said "open source it" instead of "disable it" >_<
10
bo1024 1 day ago 2 replies      
Very disappointed to hear this. A different decision would have definitely swayed me to buy AMD for my next processor.
11
jancsika 1 day ago 1 reply      
Curious what giants like Google do in these cases. I assume in their vast farms of servers there are post-2006 intel and/or post-2013 amd chips.

Do they simply use the features these processors and their OSes provide? Do they get a special deal to look under the hood? Something else entirely?

12
kronos29296 1 day ago 1 reply      
Never heard about the PSP before. Looks like another corner of the computer world in total lockdown that relies on security through obscurity. Not gonna work forever, but it keeps all but the talented people out. Problem is, there are many talented people out there.
13
pcunite 1 day ago 1 reply      
What can we do about this? This is completely unacceptable! Note, I'm not referring to the source code - I couldn't care less - I'm referring to removing this obvious breach of privacy that will be used by governments.
14
Entalpi 1 day ago 2 replies      
For the uninitiated: what good does the PSP do for the system?
15
askz 1 day ago 2 replies      
Is there any good alternative to AMD or Intel processors?

EDIT: Is there any good alternative to AMD or Intel processors? Like more open ones?

16
pksadiq 1 day ago 1 reply      
Open sourcing the PSP code (or probably any other firmware) may not help protect you (or anyone) in terms of security or privacy.

Here is where open source is different from free software, and this is why free software is better than open source.

Say, for example, DD-WRT, a very popular firmware used in several routers. Or, simply put, Linux (kernel) + BusyBox, a very popular combination of open source code used in several embedded systems. They are simply open source when they run on those systems, not free software. You can't replace their code with (custom) versions most of the time.

All you get is the source code; you may not be allowed to run a custom version. You may not even be able to confirm that the code they gave you is the code that is actually being run on those devices.

As is always said, that is enough to fit the terms of "open source". And yes, just another reason why free software is (almost always) better than open source.

17
vesak 23 hours ago 0 replies      
Excuse my French but what a fucking surprise. Who ever thought they would? That's right, I did.

Hence I'm a bit pissed off right now. But even though I was dumb enough to believe, at least I was smart enough to wait for it, and did not waste my money.

18
westmeal 1 day ago 1 reply      
Looks like high performance comes at a security cost. It's a good thing projects like RISC-V and certain boards like the Talos exist, or else it'd be a hell of a time. Has anyone here tried the tech from SiFive? They appear to have RISC-V boards for sale.
19
sliken 16 hours ago 0 replies      
Isn't this exactly what encrypted RAM is for? Insecure DMA access to all of memory isn't such a big deal if the contents are encrypted.
20
logicchains 1 day ago 1 reply      
I wonder, is there some way mathematically that software could be written that's still secure (or unfeasibly difficult to crack) when running on a system with AMD's PSP or Intel's IME?
21
kennydude 1 day ago 0 replies      
Side tracking, but ARM really is everywhere now. Even inside x86 chips!
22
snakeanus 21 hours ago 0 replies      
If only there was a FOSS CPU that you could easily install on an FPGA; that would be really great.
23
faragon 1 day ago 4 replies      
Are Ryzen and Threadripper affected? If so, I would change my opinion on AMD and choose Intel instead for my next home computer.
24
api 1 day ago 0 replies      
I'd settle for the ability to verifiably disable the PSP.
25
throw2016 1 day ago 0 replies      
It's safe to assume hardware and software are completely and totally backdoored. It's done with NSLs and the co-operation of companies, or by compromising employees, projects, and standards, and other tricks that are child's play for most government agencies.

That makes the idea of using technology to gain some sort of privacy from state-level actors, with infinite resources and manpower to thwart you, a total nonstarter: it can at best deliver a false sense of security, and at worst more serious consequences for those who may need it.

The bottom line is that established power is paranoid and wants to monitor all communication, whatever they may proclaim publicly; in hindsight, and from what we now know, it appears they have always done it.

26
mtgx 1 day ago 0 replies      
Backdoor confirmed.
27
lost_name 1 day ago 1 reply      
Is there any reason why people were (seemingly) expecting this, or was it just an idea that gained momentum that AMD never actually suggested or considered?
28
jincheker 1 day ago 1 reply      
F*k noobs manipulated by this nonsense. Intel does the same thing; if you want freedom, build your own.
25
Dice-O-Matic hopper and elevator (2009) gamesbyemail.com
241 points by pavel_lishin  19 hours ago   54 comments top 16
1
tlb 19 hours ago 3 replies      
When I play RISK with my kids, the time invested in each roll of the dice is proportional to what's at stake. During a critical close battle, they might shake, blow, and otherwise cajole the dice for a minute before rolling. Playing on an iPad with its RNG loses all this, so it's very pleasing to think of playing online with real dice.
2
Animats 19 hours ago 4 replies      
Cute mechanism, but only running it part time and storing random numbers for later use is asking for a security breach. If you can find out what random numbers are coming up, you win.

Vibratory bowl feeders pretty much solve the problem of getting simple objects lined up.[1] Any object that isn't lined up properly gets dropped off the ramps back into the bowl for another try.

[1] https://www.youtube.com/watch?v=Mejn0n4IslY

3
BatFastard 18 hours ago 3 replies      
A wonderful machine, I can imagine the sound of it two rooms away.

So now, as an engineer and semi-pro backgammon player: the ultimate dice were ones that had the divots drilled out and replaced with a different colored plastic, so that the uneven weight of the dice was not a factor. Just wondering...

4
samfriedman 19 hours ago 0 replies      
This is great. Reminds me of the Lego brick identifier/sorter that was posted a while ago. Sure, there's no real reason that true-random bits from Random.org couldn't be used, but I think a project like this is neat precisely because it takes such great strides to bring back a classic physical component of playing board games that the service is otherwise designed to eliminate.
5
saluki 12 hours ago 0 replies      
I've been playing on gamesbyemail.com for over 12 years. It's amazing what Scott implemented early on in the AJAX era. I mainly play WW2 the Axis & Allies game. Amazing job. Great fun.

He had a lego dice roller before this.

Amazing site.

Thanks for all the hard work on this.

6
falcolas 19 hours ago 2 replies      
Insufficient randomness from random.org?

Odd. Seems like the only way it wouldn't be random is if the code transforming the output into die rolls was wrong, or if the inherent unfairness of dice with pips carved out of them is truly desired (which, it seems, could still be modeled from a truly random source).
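
To that last point, a toy sketch of how pip bias could indeed be layered on a truly random source via inverse-CDF sampling (the face weights below are made up; real ones would come from the machine's own roll statistics):

    import random

    # Hypothetical face weights: faces with more pips drilled out are
    # lighter, so (in this toy model) high faces land face-up slightly
    # more often. The weights sum to 1.0.
    weights = {1: 0.160, 2: 0.163, 3: 0.166, 4: 0.167, 5: 0.170, 6: 0.174}

    def biased_roll(rng=random.random):
        """Turn one uniform number in [0, 1) into a biased die roll."""
        u = rng()  # could just as well be fed from random.org bits
        cumulative = 0.0
        for face, p in sorted(weights.items()):
            cumulative += p
            if u < cumulative:
                return face
        return 6  # guard against floating-point round-off

    rolls = [biased_roll() for _ in range(10000)]
    print({f: rolls.count(f) for f in range(1, 7)})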

A cool machine though. I was expecting the captures to take place somewhere other than the lifting chain though, but it makes sense for the setup.

7
gburt 10 hours ago 0 replies      
How could rolling dice be "more random" by any meaningful definition of randomness than random.org? Rolling dice is a pretty trivial physics problem with really hard-to-observe parameters... parameters whose dimensionality the machine reduces considerably.
8
w8rbt 17 hours ago 0 replies      
This is awesome. Seems you could make it a business and sell some random bits. Maybe compete with hotbits and random.org.
9
JoeDaDude 14 hours ago 0 replies      
If you need a really random number, NIST can provide one free of charge:

https://beacon.nist.gov/home
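
Fetching one is a few lines; note the endpoint below is my recollection of the v1 beacon REST API and should be treated as an assumption (check the site for the current interface):

    import urllib.request

    # Assumed v1 endpoint: returns an XML record containing a 512-bit
    # outputValue, published (and signed) every 60 seconds.
    URL = "https://beacon.nist.gov/rest/record/last"

    with urllib.request.urlopen(URL) as resp:
        print(resp.read().decode())

Keep in mind the beacon is public: fine for lotteries and audits, useless for secret keys, since everyone sees the same numbers.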

10
philippnagel 19 hours ago 0 replies      
The Internet of Things is amazing.
11
dang 17 hours ago 1 reply      
12
kevin_thibedeau 13 hours ago 0 replies      
It would be interesting to silently switch back to a PRNG for a few months and see if anyone notices.
13
theophrastus 18 hours ago 0 replies      
What an impressive device! Does it retain statistics keyed to a particular die? So that you could possibly identify a 'loaded' cube which sneaked in?
14
gsdfg4gsdg 17 hours ago 0 replies      
It would be cool if the online game showed you the actual photo of your dice roll.
15
PhasmaFelis 19 hours ago 3 replies      
> I have used Math.random, Random.org and other sources, but have always received numerous complaints that the dice are not random enough. Some players have put more effort into statistical analysis of the rolls than they put into their doctoral dissertation.

I seriously doubt that high-quality electronic randomness is non-random enough to have a noticeable effect on the outcome of board games. It's nice that the guy is accommodating enough to go to all this effort, but it seems unnecessary. Cool project, though.
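
The complaints are also cheap to test: a chi-square goodness-of-fit on the roll tallies settles "not random enough" directly. A minimal sketch (the counts are invented):

    from scipy.stats import chisquare

    # Hypothetical tally of 60,000 rolls, faces 1..6.
    observed = [9850, 10020, 10110, 9930, 10080, 10010]

    # Null hypothesis: a fair die, i.e. uniform expected counts.
    stat, p = chisquare(observed)
    print(f"chi2 = {stat:.2f}, p = {p:.3f}")
    # A large p-value means the tallies are consistent with a fair die;
    # a tiny one would actually vindicate the complainers.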

16
tofflos 19 hours ago 0 replies      
It's magnificent!
26
What Does It Take to Track a Million Cell Phones? thehftguy.com
241 points by siva7891  1 day ago   80 comments top 18
1
jandrewrogers 23 hours ago 3 replies      
I've designed systems that do this on continental scales (i.e. hundreds of millions of cell phones simultaneously, in real time). The devil is in the details, and the details are non-trivial; this is not an "intern and 6 months" job. Mobile telemetry is not nearly as ideal in practice as assumed here, and it typically takes a couple of years to learn how to handle the numerous peculiar artifacts of that data that will damage the quality of a naive implementation. Reconstructing a model of the population from the cleaned data that approximates the ground truth is surprisingly difficult and requires quite a bit of clever data science and maths.

It takes a lot of work and expertise to build a population model from mobile telemetry that approximately reflects reality. Far fewer people know how to do this well than you might assume by looking at the requirements for a naive implementation. Even most mobile carriers have limited ability.

2
strictnein 1 day ago 2 replies      
This article finally answered a question I've had for a while: how they can do decent triangulation with just two towers.

> "We said that a tower covers a radius around it. In practice, this is sub optimal so thats not how its done.

> Instead, a station is usually split in 3 independent beams of 120 degrees."

So it's not the intersection of two circles anymore, it's the intersection of two arcs, which will likely only have one intersection point, unlike circles.
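
A rough sketch of that arc intersection in Python (the tower positions, range estimates, and beam bearings are made up; a real system layers timing-advance rings and error models on top): intersect the two range circles, then keep only the candidates inside both 120-degree beams.

    import math

    def circle_intersections(p0, r0, p1, r1):
        """Intersection points of two tower range circles."""
        dx, dy = p1[0] - p0[0], p1[1] - p0[1]
        d = math.hypot(dx, dy)
        if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
            return []  # circles don't cross (or are concentric)
        a = (r0**2 - r1**2 + d**2) / (2 * d)
        h = math.sqrt(max(r0**2 - a**2, 0.0))
        xm, ym = p0[0] + a * dx / d, p0[1] + a * dy / d
        return [(xm + h * dy / d, ym - h * dx / d),
                (xm - h * dy / d, ym + h * dx / d)]

    def in_sector(tower, bearing_deg, point, width_deg=120):
        """True if point lies inside the tower's beam."""
        ang = math.degrees(math.atan2(point[1] - tower[1],
                                      point[0] - tower[0]))
        diff = (ang - bearing_deg + 180) % 360 - 180  # wrap to [-180, 180)
        return abs(diff) <= width_deg / 2

    # Hypothetical setup: position, range estimate (m), beam bearing (deg).
    t0, r0, b0 = (0.0, 0.0), 1000.0, 45.0
    t1, r1, b1 = (1500.0, 0.0), 900.0, 135.0

    fix = [p for p in circle_intersections(t0, r0, t1, r1)
           if in_sector(t0, b0, p) and in_sector(t1, b1, p)]
    print(fix)  # a single point once both beams are applied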

3
Rjevski 1 day ago 3 replies      
Note that a lot of the information from the BTS is already available to anyone who "asks nicely".

The mechanism that provides roaming is based on trust, so anyone connected to the SS7 network can query the location of any phone in the world and even intercept its calls. Just say to the home carrier "hey this phone is roaming on my network, would you be able to send me all of its calls and texts?".

4
warrenm 1 day ago 2 replies      
The phone companies already do this, more or less, as is shown in court cases where cell phone records are brought in as evidence

A decade ago that data was a little more iffy (i.e. it was more a good estimate, typically within half a mile or less, than a true location), but with a combination of more towers (and therefore more data points), the ubiquity of smartphones (which check in more often, are doing geolocation-related things, etc.), and better / more accessible / well-known analytics tools, I think even 6 months would be a generous time-frame.

5
TACIXAT 1 hour ago 0 replies      
I was hoping there would be some information in here about what cell phones leak that a third party could pick up on. For example, tracking the MAC address in beacon packets, or the cell-frequency equivalent of that. Of course if you can hook into the base stations you can track them.
6
mikhailfranco 5 hours ago 0 replies      
To answer the 'Call for comment' about intersecting complex shapes... one simple, fast, general, approximate, discrete method is to use OpenGL to get your GPU to do it for you. Just render the shapes into an off-screen framebuffer, using appropriate logic ops or stencil planes, then read back the final buffer to get a bitmask of the possible positions. To reduce to one estimate of position, find the centroid of the largest contiguous pixel group (flood-fill different seed ids; histogram pixels; select region id with highest count).
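
For readers without a GL setup, the same discrete trick can be sketched on the CPU, with numpy bitmasks standing in for the framebuffer (the stations, ranges, and bearings below are invented):

    import numpy as np
    from scipy import ndimage

    # Rasterize each coverage shape into a boolean grid (say 1 cell = 10 m).
    N = 500
    yy, xx = np.mgrid[0:N, 0:N]

    def annular_sector(cx, cy, r_min, r_max, bearing, width=120):
        """Mask of a 120-degree beam between two range estimates."""
        r = np.hypot(xx - cx, yy - cy)
        ang = np.degrees(np.arctan2(yy - cy, xx - cx))
        diff = (ang - bearing + 180) % 360 - 180
        return (r >= r_min) & (r <= r_max) & (np.abs(diff) <= width / 2)

    # Hypothetical measurements from two base stations; & is the logic op.
    mask = (annular_sector(100, 100, 90, 120, 45)
            & annular_sector(400, 100, 220, 260, 160))

    # Largest contiguous pixel group -> its centroid is the estimate.
    labels, n = ndimage.label(mask)
    if n:
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        best = np.argwhere(labels == np.argmax(sizes) + 1)
        print("estimated cell (row, col):", best.mean(axis=0))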
7
contingencies 1 day ago 0 replies      
Q. What Does It Really Take to Track a Million Cell Phones?

A. Sell outsourced billing solutions to the mobile carrier. (See AMDOCS)

8
Cieplak 1 day ago 1 reply      
Please don't abuse this :)

https://github.com/ernw/ss7MAPer

9
frankydp 20 hours ago 0 replies      
Inrix, TOMTOM, and a couple of others have been providing this data as a product for at least two decades. There was an early provider that led the space, but the name of that company eludes me at the moment; it may actually have been purchased by Inrix.

Most of those companies focused on roughly 10 m resolution and on path data, to build traffic speed data for local news companies.

It only cost a couple million bucks and an extensive partnership agreement to get into the space.

There is a lot of data washing in those agreements, mostly related to preventing reverse identification.

Airsage has taken it to the next level in the more recent past with GPS-based anonymized data, but data with EXTENSIVE history. The Airsage product is zip-code resolution and smaller, and can provide months to years of location history for an anonymous cell phone ID.

10
jakeogh 18 hours ago 0 replies      
Seems easy to mitigate with a tweak to the network connection order: https://news.ycombinator.com/item?id=10985599
11
harlanji 22 hours ago 1 reply      
I did the math a while back (don't have the notes at the moment), but scaling an AWS system I built enough to collect 600m points of data each minute, compute on the data within 100ms, and retain it for a few minutes would run a bit over $10k USD/mo to operate. I operated it at about 3m events/min with a good amount of compute per event, including IP-to-geo lookup... Zookeeper would be the only bottleneck in this case, assuming good enough partitioning.
12
losteverything 1 day ago 3 replies      
If my phone is powered off can i be tracked?

What if i remove the battery?

13
sengork 14 hours ago 0 replies      
Who else noticed the Winamp icon in one of the diagrams?

https://thehftguy.files.wordpress.com/2017/07/tdoa.png?w=300...

14
liprais 23 hours ago 2 replies      
This method will only work with GSM networks, because:

1. GSM networks don't verify the BTS.
2. GSM encryption keys are cracked and all over the internet.

Users of other kinds of networks should not worry about this kind of hack. Actually, here in China a fake BTS can easily be purchased online.
15
eleitl 1 day ago 0 replies      
Now you know why my Nokia 3310 is switched off most of the time.
16
draw_down 21 hours ago 0 replies      
Nothing worthwhile ever takes an intern and six months. Ever.
17
devrandomguy 1 day ago 2 replies      
A: A deeply sociopathic mindset. See the requirements section for details.
18
trekking101 1 day ago 2 replies      
Somebody please explain this line from the post:

Radio waves travel at the speed of light 299 792 458 m/s.
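
For context: the systems described in the post locate phones from differences in signal arrival time (TDOA), so that constant converts timing error directly into position error. A quick back-of-envelope (the GSM bit period is a standard figure; the rest is arithmetic):

    C = 299_792_458  # speed of light, m/s

    print(C * 1e-9)  # ~0.3 m of position error per nanosecond of timing error
    print(C * 1e-6)  # ~300 m per microsecond

    # GSM example: one bit period is ~3.69 us, so light covers ~1107 m in
    # that time; timing advance therefore quantizes range in ~553 m steps
    # (half of that distance, since it measures the round trip).
    bit_period = 1 / 270_833  # seconds
    print(C * bit_period / 2)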

27
How Google Wants to Rewire the Internet nextplatform.com
248 points by Katydid  2 days ago   99 comments top 8
1
komali2 2 days ago 8 replies      
I really wanted to understand this article. I tried wikipediaing and googling some of the things I didn't really get (Jupiter is a thing... ok and Andromeda.. riiiight). Then I got to the chart "Conceptually, here is how Espresso plugs into the Google networking stack:", which was totally unparseable by me. All the green things look the same, but one of them is the thing this article is about (Espresso, right?), and Google somehow is represented by a vague dark-grey blob... I just don't get it.

Can anybody help? Am I simply not technically competent enough to consume this article yet?

2
emersonrsantos 2 days ago 1 reply      
> But running a fast, efficient, hyperscale network for internal datacenters is not sufficient for a good user experience

It will never be sufficient. A good backbone infrastructure doesn't compensate for the fact that the majority of users don't have ISP choices especially for fast speed fixed/mobile networks.

3
deegles 2 days ago 5 replies      
"one out of four bytes that are delivered to end users across the Internet originate from Google"

Such a mind blowing statement. Wonder when (if) they'll hit one-in-three bytes.

4
eru 2 days ago 0 replies      
Somewhat related: Google's efforts to speed up TCP.

"BBR: Congestion-Based Congestion Control" http://queue.acm.org/detail.cfm?id=3022184

5
pavement 2 days ago 0 replies      
I guess this planetary naming convention is part of a tie-in with the notable Pluto switch:

https://www.wired.com/2013/03/big-switch-indigo-switch_light...

https://www.wired.com/2012/09/pluto-switch/

6
konpikwastaken 2 days ago 2 replies      
Can someone ELI5 the difference between this and https://azure.microsoft.com/en-us/services/expressroute/? Is the technology principle the same?
7
ComodoHacker 1 day ago 0 replies      
>Amin Vahdat: Yup, pretty much traffic directors. Absolutely.

This quote stands out for me and makes me uneasy.

8
zzzcpan 2 days ago 3 replies      
I don't know, it feels like a massive waste of resources, as if Google is doing it simply because it can. It's probably much cheaper for everyone else to handle latency/throughput problems on the client side and at the application level, sticking to all the traditional networking but not relying on it for quality. Even in the web browser we can already send all kinds of asynchronous requests to multiple servers in multiple datacenters, choosing the fastest response and making all kinds of decisions about where to send requests dynamically in real time.

And while I agree about overcomplicated routers and box-centric thinking in computer networks, it's pretty much impossible to change things because of the monopolistic nature of the ISP industry. They are very far from competing on the levels of quality where SDN could matter.

28
Elevation Control redblobgames.com
229 points by dougb5  2 days ago   16 comments top 4
1
Flux159 2 days ago 1 reply      
I see blog posts from redblobgames posted sometimes and I really like how in-depth the explanations are. I remember reading about 2d height maps and map generation for a side project I was working on a while back and the explanations he gave for why he was using a particular algorithm were fantastic (in addition to the interactive demos):

http://www.redblobgames.com/maps/terrain-from-noise/

http://www-cs-students.stanford.edu/~amitp/game-programming/...
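
For anyone who hasn't read the terrain-from-noise post, the core idea fits in a few lines: sum octaves of smoothed random noise. A minimal sketch using value noise (the post itself uses nicer gradient/simplex noise; the sizes and octave counts here are arbitrary):

    import numpy as np

    def value_noise(size, cells, rng):
        """Bilinearly upsample a coarse random grid to size x size."""
        coarse = rng.random((cells + 1, cells + 1))
        xs = np.linspace(0, cells, size)
        i = np.minimum(xs.astype(int), cells - 1)
        f = xs - i
        top = coarse[np.ix_(i, i)] * (1 - f) + coarse[np.ix_(i, i + 1)] * f
        bot = coarse[np.ix_(i + 1, i)] * (1 - f) + coarse[np.ix_(i + 1, i + 1)] * f
        return top * (1 - f)[:, None] + bot * f[:, None]

    def heightmap(size=256, octaves=4, seed=0):
        """Classic fBm: each octave adds finer detail at half the amplitude."""
        rng = np.random.default_rng(seed)
        height, amp, total = np.zeros((size, size)), 1.0, 0.0
        for o in range(octaves):
            height += amp * value_noise(size, 4 * 2**o, rng)
            total += amp
            amp *= 0.5
        return height / total  # normalized to [0, 1]

    print(heightmap().shape)  # (256, 256)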

2
theandrewbailey 2 days ago 7 replies      
Unless it's modeling islands, I find most terrain generators unnatural, at least from above. I wonder how feasible (and realistic looking) it would be to build a terrain generator using a bunch of fluid simulation to model plate tectonics, weather, and erosion.
3
akx 2 days ago 0 replies      
I had kind of the inverse idea some time (years) back -- generating polygonal contour maps, and heightmaps out of them. See the demo here: http://akx.github.io/islands/
4
Animats 2 days ago 0 replies      
Terrain generation is usually fractal, and you can start from a plane, or a hand-created surface and let the fractal algorithm fill in detail. That's old. The cool new thing was seen in Moana, where the water surface generation allowed animator control with the water being a character.

That created an uncanny valley problem. When the water "goes character", it looks a bit strange, because it suddenly stops obeying physical laws. That's OK for a Disney princess story, but used with high rendering realism it looks wrong.

29
Multi-Task Learning in Atari Video Games with Emergent Tangled Program Graphs acm.org
209 points by sengork  2 days ago   44 comments top 11
1
bomdo 2 days ago 1 reply      
I was a little surprised at the headline, since I expected 'outperforms' to mean that it had better end results, which is of course not the case. GP is just much faster due to its relative simplicity, and the results are close enough to those achieved with NNs and deep learning.

> Finally, while generally matching the skill level of controllers from neuro-evolution/deep learning, the genetic programming solutions evolved here are several orders of magnitude simpler, resulting in real-time operation at a fraction of the cost.

> Moreover, TPG solutions are particularly elegant, thus supporting real-time operation without specialized hardware

This is the key takeaway and yet another reminder to not make deep learning the hammer for all your fuzzy problems.

2
smdz 2 days ago 3 replies      
One of the huge benefits of GPs over NNs is the ease of reverse engineering a GP tree compared to NN models. It's not effortless, however; it's just not mathematically complex like NNs, i.e. a programmer who isn't a mathematician can analyze GPs with a lot of patience.

EDIT: I have found GPs to be relatively slow to very slow. But very likely that is because of the lack of interest and development compared to NNs.
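
A toy example of that inspectability: a GP genome is literally an expression tree you can print and read, as opposed to a soup of weights (the "evolved" tree below is made up):

    import operator

    # A toy evolved individual: x*x + 3*x, one tuple per tree node.
    tree = ('add', ('mul', 'x', 'x'), ('mul', 3, 'x'))

    OPS = {'add': operator.add, 'mul': operator.mul, 'sub': operator.sub}

    def evaluate(node, x):
        if node == 'x':
            return x
        if isinstance(node, (int, float)):
            return node
        op, left, right = node
        return OPS[op](evaluate(left, x), evaluate(right, x))

    def pretty(node):
        """The whole genome prints as a readable formula."""
        if node == 'x' or isinstance(node, (int, float)):
            return str(node)
        op, left, right = node
        sym = {'add': '+', 'mul': '*', 'sub': '-'}[op]
        return f"({pretty(left)} {sym} {pretty(right)})"

    print(pretty(tree))       # ((x * x) + (3 * x))
    print(evaluate(tree, 2))  # 10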

3
nivwusquorum 2 days ago 2 replies      
Those are really old results. They should compare to this one: https://arxiv.org/pdf/1511.06581.pdf
4
partycoder 2 days ago 0 replies      
The convenient thing about Atari games is that there is usually a numerical score that can be used as input for the fitness function.
5
cshenton 2 days ago 1 reply      
This is super cool, but it doesn't outperform deep learning based RL methods.

In fact, I'm not sure how much more compute efficient than something like A3C it would be. That can produce 4x the score of DQN in a comparable number of hours (and on a CPU).

6
gourou 2 days ago 3 replies      
Genetic programming seems lightweight; what are some cool applications of it?
7
99mistakes 2 days ago 0 replies      
Slightly relevant, here's a state of the art drone AI built using genetic fuzzy systems: https://www.forbes.com/sites/jvchamary/2016/06/28/ai-drone/#...
8
gourou 2 days ago 3 replies      
What's a good starting point for someone interested in building game AI?
9
nocoder 2 days ago 2 replies      
This sounds interesting. I would like someone from the field of genetic programming to explain how this works and how it differs from current DL approaches.
10
jerianasmith 2 days ago 0 replies      
I like GP, but the problem is the ASTs: these can get huge. And the only advantage is the ease of reverse engineering.
11
vivek1410 2 days ago 1 reply      
30
The future of deep learning keras.io
237 points by nicolrx  1 day ago   62 comments top 18
1
computerex 1 day ago 4 replies      
> In DeepMind's AlphaGo, for example, most of the "intelligence" on display is designed and hard-coded by expert programmers (e.g. Monte-Carlo tree search);

Not true. This paraphrases the original paper: https://www.tastehit.com/blog/google-deepmind-alphago-how-it...

> They tested their best-performing policy network against Pachi, the strongest open-source Go program, and which relies on 100,000 simulations of MCTS at each turn. AlphaGo's policy network won 85% of the games against Pachi! I find this result truly remarkable. A fast feed-forward architecture (a convolutional network) was able to outperform a system that relies extensively on search.

Also, this article reeked of AGI ideas. Deep learning isn't trying to solve AGI. Reasoning and abstraction are high-level AGI concepts that I don't think apply to deep learning. I don't know the path to AGI, but I don't think it'll be deep learning. I think it would have to be fundamentally different.

2
amelius 1 day ago 2 replies      
What about the future of jobs in the field of deep learning?

EDIT: I'm thinking deep learning will become much like web development is today. Everybody can do it, and only a few experts will work at the technological frontier and develop tools and libraries for everybody else to use.

Therefore, if one invests time in DL, then I suppose it better be a serious effort (at research level), rather than at the level of invoking a library, because soon everybody can do that.

3
randcraw 1 day ago 0 replies      
I enjoyed part 1 of Chollet's two articles today but am less fond of this one. It suggests that deep learning will expand from its present capabilities of recognizing patterns to one day master logical relations and employ a rich knowledge base of general facts, growing into a situated general problem solver that one day may equal or surpass human cognition. Maybe. But he then proposes that deep nets will rise to these heights of self-organization and purposefulness using one of the weakest and slowest forms of AI, namely evolutionary strategies?

I don't think so.

The many problems bedeviling the expansion of an AI's competence at one specific task into mastery of more general and more complex tasks are legend. Alas neither deep nets nor genetic algorithms have shown any way to address classic AGI roadblocks like: 1) the enormity of the possible solution space when synthesizing candidate solutions, and 2) the enormous number of training examples needed to learn the multitude of common sense facts common to all problem spaces, and 3) how to translate existing specific problem solutions into novel general ones. Wait, wait, there's more...

These roadblocks are common to all forms of AI. The prospect of replacing heuristic strategies with zero knowledge techniques (like GA trial and error) or curated knowledge bases with only example-based learning is unrealistic and infeasible. Likewise, the notion that a sufficient number of deep nets can span all the info and problem spaces that will be needed for AGI is quite implausible. While quite impressive at the lowest levels of AI (pattern matching), deep learning has yet to address intermediate and high level AI implementation challenges like these. Until it does, there's little reason to believe DL will be equally good at implementing executive cognitive functions.

Yes DeepMind solved Go using AlphaGo's deep nets (and monte carlo tree search). But 10 and 20 years before that IBM Watson solved Jeopardy and IBM Deep Blue solved chess. At the time, everyone was duly impressed. Yet today nobody is suggesting that the AI methods at the heart of those game solutions will one day pave the yellow brick road to AI Oz.

In another 10 years, I predict it's just as likely that AlphaGo's deep nets will be a bust as a boom, at least when it comes to building deep AI like HAL 9000.

4
therajiv 1 day ago 1 reply      
TLDR is that models will become more abstract (current pattern recognition will blend with formal reasoning and abstraction), modular (think transfer learning, but taken to its extreme - every trained model's learned representations should be applicable to other tasks), and automated (ML experts will spend less time in the repetitive training/optimization cycle, instead focusing more on how models apply to their specific domain).
5
toisanji 1 day ago 1 reply      
This is part 2 of the post from yesterday: https://news.ycombinator.com/item?id=14790251

And the author posted a comment on hn:

"fchollet: Hardly a "made-up" conclusion -- just a teaser for the next post, which deals with how we can achieve "extreme generalization" via abstraction and reasoning, and how we can concretely implement those in machine learning models."

I like the ideas presented in the post, but it's not concrete or new at all. Basically he writes "everything will get better".

I do agree with the point that we need to move away from strictly differentiable learning, though. All deep learning methods only work on systems that have derivatives, so that we can do backpropagation. I don't think the brain learns with backpropagation at all.

* AutoML: there are dozens of these types of systems already; he mentions one in the post, called HyperOpt. So we will continue to use these systems and they will get smarter? Many of these systems are basically grid search / brute force. Do you think the brain is doing brute force at all? We have to use these now because there are no universally correct hyperparameters for tuning these models. As long as we build AI models the way we do now, we will have to do this hyperparameter tuning. Yes, it will get better; again, nothing new here. (A usage sketch follows below.)
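
To be fair to HyperOpt, its TPE algorithm models past trials rather than brute-forcing a grid. A minimal usage sketch; the objective and search space are stand-ins for a real train/validate loop:

    from hyperopt import fmin, tpe, hp

    # Made-up objective standing in for "train a model, return val loss".
    def objective(params):
        lr, layers = params['lr'], params['layers']
        return (lr - 0.01) ** 2 + 0.001 * layers  # pretend loss surface

    space = {
        'lr': hp.loguniform('lr', -7, 0),        # roughly 0.001 .. 1.0
        'layers': hp.choice('layers', [1, 2, 3, 4]),
    }

    # TPE proposes new trials from a model of the good/bad ones so far.
    best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=50)
    print(best)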

* He talks about reusable modules. Everyone in the deep learning community has been talking about this a lot; it's called transfer learning, and people are using it now and working on making it better all the time. We currently have "model zoos", which are databases of pretrained models that you can use. If you want to see a great sci-fi short piece on what neural network mini-programs could look like, written by the head of computer vision at Tesla, check out this post: http://karpathy.github.io/2015/11/14/ai/

6
nzonbi 1 day ago 2 replies      
Interesting article, on a difficult topic: speculating about the future of deep learning. The author deserves recognition for writing about this. In my personal opinion, within the next 10 years there will be systems exhibiting basic general intelligence behavior. I am currently doing early hobbyist research on it, and I see it as feasible. These systems will not be very powerful initially; they will exist and work in simpler simulated environments. Eventually we will be able to make these systems powerful enough to handle the real world, although that will probably not be easy.

I somewhat disagree with the author. I don't think that deep learning systems of the future are going to generate "programs" composed of programming primitives. In my speculative view, the key to general intelligence is not very far from our current knowledge. Deep learning, as we currently have it, is a good enough basic tool; there are no magic improvements to the current deep learning algorithms hidden around the corner. Rather, what I think will enable general intelligence is assembling systems of deep learning networks in the right setup. Some of the structure of these systems will be similar to traditional programs, but the models they generate will not resemble computer programs. They will be more like data graphs.

I expect within 10 years there will be computer agents capable of communicating in simplified, but functional languages. Full human language capability will come after that. And within 20 years I expect artificial general intelligence to exist. At least in a basic form. That is my personal view. I am currently working on this.

7
jdonaldson 1 day ago 1 reply      
Glad to see deep learning "coming down to earth". This is the first high-profile post I've seen that spells out exactly how DL models will become reconfigurable, purpose-built tools, and what a workflow might look like. We're still a long way away from treating them like software components.
8
kirillkh 1 day ago 3 replies      
Seeing how gradient descent is such a cornerstone of deep learning, I can't help wondering: is this how our brain learns? If not, then what prevents us from implementing deep learning the same way?
9
primaryobjects 1 day ago 0 replies      
Here are the results of my research into program synthesis using genetic algorithms.

Using Artificial Intelligence to Write Self-Modifying/Improving Programs

http://www.primaryobjects.com/2013/01/27/using-artificial-in...

There is always a research paper, if you prefer the sciency format.

BF-Programmer: A Counterintuitive Approach to Autonomously Building Simplistic Programs Using Genetic Algorithms

http://www.primaryobjects.com/bf-programmer-2017.pdf

10
guicho271828 1 day ago 0 replies      
Regarding logic and DL, there is the NeSy workshop in London: http://neural-symbolic.org/
11
crypticlizard 1 day ago 1 reply      
Are there popular modern libraries that do program synthesis? Although I've thought about this and read about the concept on HN, I've not heard it mentioned seriously, frequently, or strenuously as a thing to do, either just for fun or to get a job doing it. This could be a popular way to solve programming problems without needing programmers. I think this truly would kick off AI as a very personal experience for the masses, because they would use AI basically like they already use a search engine now. People would use a virtual editor to design their software using freely available off-the-shelf parts. The level of program complexity could really skyrocket as people gain more control over what and how they run programs, because they can easily design it themselves. Everyone could design their own personal Facebook or Twitter, and probably a whole new series of websites too complex or for other reasons not invented yet.

For instance, you want to program the personality of a toy, so you search around using the AI search engine for parts that might work. Or you want a relationship advice coach so you put it together using personalities you like, taking only the parts you want from each personality. Or another example would be just to make remixes of media you like. Because everything works without programming anyone can participate.

12
lopatin 1 day ago 1 reply      
I'm also interested to see how the worlds of program synthesis (specifically type directed, proof-checking, dependently typed stuff) can combine with deep learning. If recent neural nets have such great results on large amounts of unstructured data, imagine what they can do with a type lattice.
13
Kunix 1 day ago 0 replies      
About libraries of models: it would be useful to have open source pre-trained models which can be augmented through GitHub-like pull requests of training data together with label sets.

It would allow us to maintain versioned, always-improving models that everyone can update with an `npm update`, `git pull`, or equivalent.

14
ipunchghosts 1 day ago 0 replies      
Great work! Glad someone can finally explain this to the masses in an easy to understand way. Looking forward to the future!
15
scientist 1 day ago 0 replies      
Self-driving cars are expected to take over the roads, however no programmer is able to write code that does this directly, without machine learning. However, programmers have built all kinds of software of great value, from operating systems to databases, desktop software and so on. Much of this software is open source and artificial systems can learn from it. Therefore, it could well be that, in the end, it would be easier to build artificial systems that learn to automatically develop such software than systems that autonomously drive cars, if the right methodologies are used. The author is right to say that neural program synthesis is the next big thing, and this also motivated me to switch my research to this field. If you have a PhD and are interested in working in neural program synthesis, please check out these available positions: http://rist.ro/job-a3
16
amelius 1 day ago 0 replies      
I'm wondering if we will ever figure out how nature performs the equivalent of backpropagation, and if that will change how we work with artificial neural networks.
17
nextstar 1 day ago 0 replies      
I'm excited for the easy-to-use tools that have to be coming out relatively soon. There are a lot right now, but the few I've used weren't as intuitive as I feel they could be.
18
MR4D 1 day ago 1 reply      
Compression.

That one word disrupts his whole point of view. This idea that we need orders and orders of magnitude more data seems insane. What we need is to figure out how to be more effective with each layer of data, and be able to have compression between the tensor layers.

The brain does a great job of throwing away information, and yet we can reconstruct pretty detailed memories. Somehow I find it hard to believe that all of that data is orders of magnitude above where we are today. Much more efficient, yes. And that's through compression.
