hacker news with inline top comments    9 Jul 2017 News
BayesDB: A probabilistic programming platform mit.edu
69 points by relyio  4 hours ago   1 comment top
kensai 2 hours ago 0 replies      
Is there a comparison of its accuracy against traditional methods? Admittedly, this machine-assisted modeling sounds really interesting.
Verdi Formally Verifying Distributed Systems uwplse.org
97 points by mindcrime  6 hours ago   5 comments top 4
relyio 5 hours ago 0 replies      
Remember, two years ago James Wilcox and Doug Woos were formally proving Raft's linearizable semantics using Verdi: https://news.ycombinator.com/item?id=10017549
im_down_w_otp 2 hours ago 1 reply      
Currently stuck between a rock and a hard place with things like this.

Many of the other formal verification tools make it very easy to have your implementation drift or be entirely unmoored from your specification, but they let you keep working at a level of abstraction from your problem that's still very familiar. Though things like SMACK and DiVinE are helping to decrease the gap between spec and code.

Using things like Coq + program extraction brings the overlap between spec and implementation into much, much closer alignment, but brings with it additional problems. Writing a complex program in a very abstract language further away from an engineer's typical problem domain, being fairly limited in types of languages supported for extraction, and/or still having to have an awful lot of faith in the extractor (which itself is unverified as near as I can tell) are all things that are currently keeping me out of Coq for immediate use cases.

The good news though is that there's a lot of fairly high-profile work being done (like Verdi) to increasingly bring formal methods to increasingly complex software problems in ways that make using formal methods more approachable and usable, and that's truly wonderful.

palmskog 4 hours ago 0 replies      
Some new developments of the framework in the last two years:

- packaging via OPAM to separate framework code and system code

- support for specifying and proving liveness properties

- support for proving Abadi/Lamport style refinement mappings between systems

The following workshop paper gives an overview of current and upcoming work: http://conf.researchr.org/event/CoqPL-2017/main-verification...

nickpsecurity 5 hours ago 0 replies      
Beipanjiang Bridge, suspended 565m above China's south-west mountains [video] bbc.com
46 points by mudil  4 hours ago   14 comments top 6
iUsedToCode 1 hour ago 1 reply      
I like how the reporter speaks Chinese. Usually it's English only, which I suppose is translated off screen for the local being interviewed.

I don't watch much BBC so I don't know if that changed lately. It's a change for the better, imo.

moontear 12 minutes ago 0 replies      
Really impressive bridge to drive over. Our bus driver (regular travel bus) stopped on the bridge so people could take pictures. Nobody could tell us why he stopped on the bridge. We figured out only afterwards that it was the world's highest bridge we drove on.
prawn 2 hours ago 0 replies      
Seriously impressive bridge.

(It's a video a few minutes long and worth watching. I rarely watch videos on news sites, but glad I watched this one.)

zokier 20 minutes ago 1 reply      
Very little is said about the reason for building the bridge. Doing a massive construction project so that some potato farmer can carry his goods to the neighboring village seems unwarranted. Is there some industry or something that can use the bridge, or maybe it has some strategic value?
NaliSauce 1 hour ago 0 replies      
I find it fascinating that (pressure) grouting is shown as something revolutionary and new here.
markdown 2 hours ago 1 reply      

That's a beautiful bridge though.

Minimal PDF brendanzagaeski.appspot.com
165 points by ingve  10 hours ago   52 comments top 12
aidos 25 minutes ago 0 replies      
I've spent much of the last year down in the internals of PDFs. I recommend looking inside a PDF to see what's going on. PDF gets a hard time, but once you've figured out the basics it's actually pretty readable.

Some top tips: if you decompress the streams first, you'll get something you can read and edit with a text editor:

 mutool clean -d -i in.pdf out.pdf
If you mess with the PDF by hand, you can run it through mutool again to fix up the object positions.

Text isn't flowed / laid out like HTML. Every glyph is more or less manually positioned.

Text is generally done with subset fonts. As a result, characters end up being mapped to \1, \2, etc. So you can't normally just search for strings, but you can often (though not always easily) find the characters from the Unicode map.
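To make the structural tips above concrete, here is a small sketch (my own illustration, not from the comment) that hand-builds a one-page PDF in Python, tracking the byte offset of each object so the xref table's hard decimal offsets come out right:

```python
# Hand-build a minimal one-page PDF, recording the byte offset of each
# object so the cross-reference (xref) table can point at them correctly.
objects = [
    b"1 0 obj\n<< /Type /Catalog /Pages 2 0 R >>\nendobj\n",
    b"2 0 obj\n<< /Type /Pages /Kids [3 0 R] /Count 1 >>\nendobj\n",
    b"3 0 obj\n<< /Type /Page /Parent 2 0 R /MediaBox [0 0 612 792] >>\nendobj\n",
]

pdf = b"%PDF-1.4\n"
offsets = []
for obj in objects:
    offsets.append(len(pdf))          # byte offset where this object starts
    pdf += obj

xref_start = len(pdf)
pdf += b"xref\n0 4\n0000000000 65535 f \n"
for off in offsets:
    pdf += b"%010d 00000 n \n" % off  # each xref entry is exactly 20 bytes
pdf += b"trailer\n<< /Size 4 /Root 1 0 R >>\nstartxref\n"
pdf += str(xref_start).encode() + b"\n%%EOF\n"

with open("minimal.pdf", "wb") as f:
    f.write(pdf)
```

This is roughly the skeleton the linked "Minimal PDF" article walks through; editing any object by hand shifts every offset after it, which is why mutool's fix-up pass is so handy.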

j_s 8 hours ago 2 replies      
See also on the same site: Hand-coded PDF tutorial | https://brendanzagaeski.appspot.com/0005.html

If you need more, the "free" (trade for your email) e-book from Syncfusion PDF Succinctly demonstrates manipulation barely one level of abstraction higher (not calculating any offsets manually): https://www.syncfusion.com/resources/techportal/details/eboo...

"With the help of a utility program called pdftk[1] from PDF Labs, well build a PDF document from scratch, learning how to position elements, select fonts, draw vector graphics, and create interactive tables of contents along the way."

[1] https://www.pdflabs.com/tools/pdftk-server/

ekr 1 hour ago 0 replies      
See also klange's resume: https://github.com/klange/resume. A resume PDF that's also a valid ISO 9660, bootable ToaruOS image.
eru 19 minutes ago 0 replies      
> Most PDF files do not look readable in a text editor. Compression, encryption, and embedded images are largely to blame. After removing these three components, one can more easily see that PDF is a human-readable document description language.

Of course, PDF is intentionally so weird: it was a move by Adobe because other companies were getting too good at handling PostScript.

Embedding custom compression inside your format is seldom worth it: .ps.gz is usually smaller than PDF.

tptacek 8 hours ago 3 replies      
The biggest complexity (and security) problem with PDF is that it's also effectively an archive format, in which more or less every display file format conceived of before ~2007 can be embedded.
Noctem 1 hour ago 0 replies      
This page was helpful to me a couple years ago while crafting the tiny PDF used for testing in Homebrew. https://github.com/Homebrew/legacy-homebrew/pull/36606
kazinator 6 hours ago 0 replies      
Plain text ... but with hard offsets ... encoded as decimal integers. Yikes!
fizixer 8 hours ago 4 replies      
This is good but Postscript is even better. Someday I'll learn it and see what I can do with it.
amenghra 2 hours ago 0 replies      
If you like this you might enjoy this repo: https://github.com/mathiasbynens/small
mp3geek 6 hours ago 2 replies      
Would be nice if browsers supported saving pages directly as PDF using their own PDF libraries.
ilaksh 5 hours ago 1 reply      
PDF is literally the worst possible format for document exchange because it has the most unnecessary complexity of all document formats, which makes it the hardest to access. But popularity and merit are two totally different things.
jimjimjim 1 hour ago 0 replies      
PDF is like C++:

It's used everywhere because you can do everything with it.

This also leads to the problem that you can do anything with it.

So each industry is coming up with its own subset of PDF that applies some restrictions in the hopes of making documents verifiable.

The downside is that these subsets slowly start bloating until they allow everything anyway.

I'm looking at you, PDF/A. Grr.

Benchmarking TensorFlow on Cloud CPUs: Cheaper Deep Learning Than Cloud GPUs minimaxir.com
164 points by myth_drannon  10 hours ago   73 comments top 15
paulsutter 10 hours ago 8 replies      
Shoutout for Hetzner's 99 euro/month server with a GTX 1080, much better than the pseudo-K80s that Google Cloud provides for $520/month. The Google K80s are half or a quarter of the speed of a real K80, part of the reason they show so badly in the comparison.


boulos 4 hours ago 1 reply      
Disclosure: I work on Google Cloud (and launched Preemptible VMs).

Thanks for the write-up, Max! I want to clarify something though: how do you handle and account for preemption? As we document online we've oscillated between 5 and 15% preemption rates (on average, varying from zone to zone and day to day) but those are also going to be higher for the largest instances (like highcpu-64). But if you need training longer than our 24-hour limit, or you're getting preempted too much, that's a real drawback (Note: I'm all for using preemptible for development and/or all batch-ey things but only if you're ready for the trade-off).

While we don't support preemptible with GPUs yet, it's mostly because the team wanted to see some usage history. We didn't launch Preemptible until about 18 months after GCE itself went GA, and even then it involved a lot of handwringing over cannibalization and economics. We've looked at it on and off, but the first priority for the team is to get K80s to General Availability.

Again, Disclosure: I work on Google Cloud (and love when people love preemptible).

0xbear 9 hours ago 4 replies      
FYI, y'all: cloud "cores" are actually hyperthreads. Cloud GPUs are single dies on multi-die card. If you use GPUs 24x7, just buy a few 1080 Ti cards and forego the cloud entirely. If you must use TF in cloud with CPU, compile it yourself with AVX2 and FMA support. Stock TF is compiled for the lowest common denominator.
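For reference, building TensorFlow from source with those instruction sets enabled looked roughly like this at the time (a sketch; the bazel target and flags vary between TF releases):

```shell
# Compile TensorFlow with AVX2 and FMA enabled instead of the
# lowest-common-denominator pip wheel (TF 1.x-era invocation).
bazel build -c opt --copt=-mavx2 --copt=-mfma \
    //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl
```
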
dkobran 4 hours ago 1 reply      
One of the interesting variables in calculating ML training costs is developer time. The cost of a Data Scientist (or similar role) on an hourly basis will far outweigh the most expensive compute resource by several orders of magnitude. When you factor in time, the GPU immediately becomes more attractive. Other industries with heavy/time consuming computational workloads like CGI rendering have understood this for decades. It's difficult to attach a dollar sign to the value of speeding something up because it's not only about simply saving time itself but also about the way we work: Waiting around for results limits our ability to work iteratively, scheduling jobs becomes a project of its own, the process becomes less predictable etc.

Disclaimer: Paperspace team.

shusson 9 hours ago 1 reply      
While the author's article is relevant if you are stuck on GCP, you will not reach the same conclusion on AWS. This is because AWS has GPU spot instances (P2), which can be found ~80% cheaper depending on your region [1]. Hopefully one day soon GCP will support preemptible GPU instances.

[1] https://aws.amazon.com/ec2/spot/pricing/

joeblau 9 hours ago 0 replies      
I would love to see these results put up against Google's new TPUs[1]. While TPUs are still in Alpha, my guess is that customized hardware that understands TensorFlow's APIs would be a lot more cost effective.

[1] - https://cloud.google.com/tpu/

cobookman 9 hours ago 2 replies      
I've been amazed that more people don't make use of Google's preemptibles. Not only are they great for background batch compute; you can also use them to cut your stateless webserver compute costs. I've seen some people use k8s with a cluster of preemptibles and non-preemptibles.
visarga 5 hours ago 1 reply      
For research and experimentation what you need is your own DL box. It will pay for itself in a few months. You will feel better having your own reliable hardware that you don't share or have to pay by the minute, and that will impact the kind of ideas you are going to try.

Then you scale up to the cloud to do hyperparameter search.

AndrewKemendo 9 hours ago 0 replies      
Excellent write-up, kudos on going through all of that, Max. Too bad Google will deprecate the preemptible instances as a result :P.

> There is a notable CPU-specific TensorFlow behavior; if you install from pip (as the official instructions and tutorials recommend) and begin training a model in TensorFlow, you'll see these warnings in the console:

FWIW I get the console warnings with the tensorflow-gpu installation from pip, and I verified that it was actually using the GPU.

data4science 6 hours ago 1 reply      
Paperspace has dedicated GPU instances for $0.40/hr, I'll have to compare with Hetzner...
jeremynixon 10 hours ago 0 replies      
Fascinating. Wish he could have shown benchmarks on a larger image dataset (ImageNet or CIFAR-100), as MNIST is extremely easy to train on. Great to know, especially the LSTM benchmarking.
chris_st 7 hours ago 3 replies      
Quick question to those with a deep understanding of these things... I have not been able to get GPU TensorFlow (on AWS) to run faster for the networks I'm using.

This is with a small(ish) network of perhaps a few hundred nodes... should I see a speedup for this case, or are GPUs only relevant for large CNNs, etc.?

automatapr 9 hours ago 2 replies      
Neat article. I think it's worth pointing out that this guy is an active commenter in the Hackathon Hackers facebook group, if you want to see more of his content. He can be pretty pretentious sometimes, but good content nonetheless.
vzn 6 hours ago 0 replies      
Would be interesting to see the benefit of MKL optimizations on the same examples.


yahyaheee 10 hours ago 2 replies      
No spot instances?
Show HN: Crowdsourced visualization of neighborhoods in cities hoodmaps.com
37 points by pieterhg  1 hour ago   7 comments top 4
jjallen 1 hour ago 0 replies      
It's somewhat useful even at its current broad level, but I would prefer even more specifics.

Like in Chiang Mai where the street food is just to the east of the square wall area, I wanted to mark that as having street food, because that would be useful to the inherent tourist site visitor, but there's just a huge "tourist" zone and this can't be done. "Tourist" is only so helpful to tourists :)

zurfyx 58 minutes ago 2 replies      
For those who want to see how it was built, he still has a few videos about the early stages of its development on his YouTube:





thangngoc89 1 hour ago 0 replies      
I have been following the development process of hoodmaps on Twitter since the beginning. I must say, impressive.
alexchamberlain 57 minutes ago 0 replies      
This is cool, but I feel the areas overlap significantly. For example, I live on the cusp between a rich and a tourist area in London; in reality, the area used to be social housing, so it is more affordable than the surrounding area. Furthermore, pretty much all the tourist areas of London have other use cases, whether residential or office buildings. The suburbs also have light industry.
A massive volcano that scientists can't find bbc.com
62 points by dimitrov  10 hours ago   18 comments top 5
Pitarou 3 hours ago 2 replies      
> they estimated that Kuwae's eruption had released vast quantities of magma, enough to fill the Empire State Building 37 million times over

This is a unit of journalistic measurement I have never come across before.

agucho 17 minutes ago 1 reply      
Mighty interesting, sure. Still, I can't stand these awe-driven narratives. The formula is getting old, and in jumping from one wow to the next, a lot of the real questions and facts are left half-explained. And as somebody already pointed out, the units... football fields, Hiroshima bombs and the area of California... they should standardize those already, right? Journalists could use abbreviations and the rest of us would get to write unit converters when learning a new programming language.
mrb 3 hours ago 2 replies      
They say it could have been caused by an asteroid, but don't explain why this theory is seemingly discarded.

Edit: 2 min of googling and I find a not very well known 20 km impact crater dated to exactly the right time (mid 15th century): https://en.m.wikipedia.org/wiki/Mahuika_crater

DuskStar 3 hours ago 0 replies      
Spoilers: The volcano is unlikely to still be massive. Probably a good sized crater or two though.

(Ice core evidence shows a pair of eruptions around the 1460s, likely the cause of major famines across the planet in following years. The locations of the eruptions are still unknown, though.)

jsmthrowaway 3 hours ago 1 reply      
Imagine observing these events and trying to explain them with rudimentary scientific understanding. At the time, the best explanations for volcanoes involved wind causing friction in narrow canyons or subterranean rivers of fire, but nobody really knew. One can appreciate mythology more in this light, given the titanic scale of volcanic eruptions and an entire species agape in wonder.

Cool stuff. Really drives home that we are guests of this planet.

Cameras are about to get a lot smaller economist.com
65 points by spuz  10 hours ago   15 comments top 3
wallflower 2 hours ago 4 replies      
> He was holding a small device in his hand, the size and shape of a lollipop.

"This is a video camera, and this is the precise model that's getting this incredible image quality. Image quality that holds up to this kind of magnification. So that's the first great thing. We can now get high-def-quality resolution in a camera the size of a thumb."


"But for now, let's go back to the places in the world where we most need transparency and so rarely have it. Here's a medley of locations around the world where we've placed cameras. Now imagine the impact these cameras would have had in the past, and will have in the future, if similar events transpire. Here's fifty cameras in Tiananmen Square."


"There needs to be accountability. Tyrants can no longer hide. There needs to be, and will be, documentation and accountability, and we need to bear witness."



-From "The Circle" by David Eggers

kogepathic 20 minutes ago 0 replies      
> Cameras are about to get a lot smaller

No, scientists have developed a prototype which can take fuzzy photos of barcodes.

They then go on to tell you what would be necessary to have their device equal a present day sensor in a phone, but they haven't made one yet.

In fact, no estimate is given for when this technology might be competitive with CMOS sensors. The article just points to his previous work as proof he can get some of his ideas to market.

Relevant XKCD: https://www.xkcd.com/678/

I am excited by advances in camera technology, but this headline is peddling research as a pending disruption to the industry, and I don't see any evidence of that in the article.

bwang29 2 hours ago 1 reply      
It seems like the Economist uses JavaScript to create the "You've reached your article limit" dialog. Simply press the ESC key to stop JS from executing in Chrome so that you can read the article.

Also, here is my TL;DR summary of it if you're still trying to fight through the paywall:

There is a thing called a grating coupler that works like a little high-frequency antenna receiving light signals. When you put a whole array of them together, you can do various scans of the light signals to simulate the camera pointing in different directions, or fisheye and telephoto effects, without tilting or moving the surface of the array. The underlying computation relies on the ability to calculate and control the timing of the signal arriving at each antenna, plus some classic signal interference and phasing issues. A 1cm x 1cm array would contain 1 million such couplers, which would produce an image of similar size to the iPhone 7 rear camera's, but since there is no lens involved, the camera can be made a lot thinner.
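As a toy illustration of the timing control described above (the numbers are mine, not from the article): steering a 1-D array of couplers by an angle theta means delaying element n by n * d * sin(theta) / c, so the wavefronts from all elements line up in the steered direction:

```python
import math

# Toy phased-array beam-steering delays. Assumed values: 2-micron element
# spacing d and a 10-degree steering angle theta.
c = 3.0e8            # speed of light, m/s
d = 2.0e-6           # assumed coupler spacing, m
theta = math.radians(10)

# Delay for element n so its signal lines up with element 0's.
delays = [n * d * math.sin(theta) / c for n in range(8)]
```

Changing theta re-points the "camera" with no moving parts, which is the effect the comment describes.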

API Security Checklist for developers github.com
234 points by eslamsalem  12 hours ago   51 comments top 11
tptacek 11 hours ago 7 replies      
There's some OK stuff here, but the list on the whole isn't very coherent. If this is a guide specifically for "APIs" that are driven almost entirely from browser JavaScript SPAs, it makes sense. Otherwise, a lot of these recommendations are a little weak; for instance, most of the HTTP option headers this list recommends won't be honored by typical HTTP clients.

Further, the list succumbs to the cardinal sin of software security advice: "validate input so you don't have X, Y, and Z vulnerabilities". Simply describing X, Y, and Z vulnerabilities provides the same level of advice for developers (that is to say: not much). What developers really need is advice about how to structure their programs to foreclose on the possibility of having those bugs. For instance: rather than sprinkling authentication checks on every endpoint, have the handlers of all endpoints inherit from a base class that performs the check automatically. Stuff like that.
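A minimal sketch of that base-class structure (class and method names are hypothetical, framework-agnostic):

```python
# Instead of sprinkling auth checks on every endpoint, a base handler
# performs the check before dispatching to endpoint logic.
class AuthenticatedHandler:
    def handle(self, request):
        user = self.authenticate(request)   # runs for every subclass
        if user is None:
            return {"status": 401, "body": "unauthorized"}
        return self.process(request, user)

    def authenticate(self, request):
        # Placeholder: a real app would verify a session cookie or token.
        return request.get("user")

    def process(self, request, user):
        raise NotImplementedError

class OrdersHandler(AuthenticatedHandler):
    def process(self, request, user):
        # Endpoint logic can never run without passing the auth check.
        return {"status": 200, "body": f"orders for {user}"}
```

The point is structural: a new endpoint subclasses AuthenticatedHandler and cannot forget the check, rather than relying on each author to remember it.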

Finally: don't use JWT. JWT terrifies me, and it terrifies all the crypto engineers I know. As a security standard, it is a series of own-goals foreseeable even 10 years ago based on the history of crypto standard vulnerabilities. Almost every application I've seen that uses JWT would be better off with simple bearer tokens.

JWT might be the one case in all of practical computing where you might be better off rolling your own crypto token standard than adopting the existing standard.

Kiro 10 hours ago 1 reply      
> Don't use Basic Auth

Why not? If it's an API meant to be consumed by a server I don't see what the problem is.

tiffanyh 11 hours ago 0 replies      
I don't bookmark many links but here's [1] a good one for all to keep on a similar topic.

It's an SO article on security for web transactions.

[1] https://stackoverflow.com/questions/549/the-definitive-guide...

bodhi 6 hours ago 1 reply      
What are people's thoughts on using TLS client certificates for authentication?

Given we're talking about APIs, we avoid many of the UX problems, but it feels like taking on a different set of problems than just using a bearer token. It does provide baked in solutions for things like revocation and expiry though.

moxious 11 hours ago 2 replies      
No amount of checklisting and best practices substitutes for hiring someone smart to break your stuff and tell you how they did it. You can check all the boxes and still get pwned.

You can learn and run automated tools for 6 months and end up knowing 1/3rd of what a great pentester knows.

If you want to know you can resist an attack from an adversary, you need an adversary. If you want to know that you followed best practices so as to achieve CYA when something bad happens, that's a different story.

But honestly the security picture is so depressing. Most people are saved only because they don't have an active or competent adversary. The defender must get 1,000 things right, the attacker only needs you to mess up one thing.

And then, even when the defender gets everything right, a user inside the organization clicks a bad PDF and now your API is taking fully authenticated requests from an attacker. Good luck with that.

Security, what a situation.

philip1209 11 hours ago 5 replies      
> User own resource id should be avoided. Use /me/orders instead of /user/654321/orders

Can somebody explain this?

ikeboy 9 hours ago 0 replies      
So I'm developing a simple SAAS with little to no private info and where failure isn't critical.

For the initial release I built a page that uses HTML buttons and basic JavaScript to GET pages, passes a key as a parameter, and uses web.py on the backend.

It seems like it would be a lot of work to implement the suggestions here. At what point does it make sense?

baybal2 25 minutes ago 0 replies      
The guy forgets the main thing here: length, type and range checks!

I'm finding issues like API servers hanging/crashing due to overly long or malformed headers all the time when I work on front-end projects.

Programming in a language with automatic range and type checks does not mean that you can forego vigilance even with the most mundane overflow scenarios: lots of stuff is being handled outside of the "safe" realm or by outside libraries.
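A minimal sketch of such checks on one header (the limits here are illustrative assumptions, not recommendations):

```python
# Validate a Content-Length-style header for type, length, and range
# before doing anything else with it.
MAX_DIGITS = 10          # illustrative cap on the raw string length
MAX_BODY_BYTES = 10_000_000

def parse_content_length(raw):
    if not isinstance(raw, str):
        return None                      # type check
    if not (0 < len(raw) <= MAX_DIGITS):
        return None                      # length check
    if not raw.isdigit():
        return None                      # rejects "-1", "1e9", "0x10", ...
    value = int(raw)
    return value if value <= MAX_BODY_BYTES else None   # range check
```

Boring, but exactly the kind of check whose absence lets an overlong or malformed header hang the server.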

ViktorasM 2 hours ago 0 replies      
Not a security topic, but POST is not necessarily "create" and PUT is not necessarily "update".
kpcyrd 9 hours ago 0 replies      
I've filed a pull request to include a CSP technique I've started adding on some of my apis:


EGreg 7 hours ago 1 reply      
There is a lot more you can do.

For example you can sign session IDs or API tokens when you issue them. That way you can check them and refuse requests that present invalid tokens without doing any I/O.
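A minimal sketch of that approach using an HMAC (the secret and token format are my own illustration):

```python
import base64
import hashlib
import hmac

SECRET = b"server-side-secret"  # hypothetical; load from config in practice

def issue_token(session_id: str) -> str:
    # Append an HMAC of the session ID; the server can later reject a
    # forged token with one HMAC computation and no database lookup.
    sig = hmac.new(SECRET, session_id.encode(), hashlib.sha256).digest()
    return session_id + "." + base64.urlsafe_b64encode(sig).decode()

def verify_token(token: str) -> bool:
    session_id, _, _ = token.partition(".")
    expected = issue_token(session_id)
    # Constant-time comparison to avoid leaking how many bytes matched.
    return hmac.compare_digest(token, expected)
```

Invalid tokens fail the signature check immediately, so no I/O is spent on requests that present them.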

SQL in CockroachDB: Mapping Table Data to Key-Value Storage (2015) cockroachlabs.com
26 points by yinso  4 hours ago   9 comments top 2
ForHackernews 1 hour ago 1 reply      
[Official containment subthread for complaining about the name "CockroachDB"]

Under the terms of the Unified HN Convention, agreed 2015, every thread about CockroachDB must by law contain a series of complaints about the name of the database. Please post yours below.

To help you get started, here's some prompts you might use:

"My Enterprise CTO will never go for something named..."

"I just think the name sounds really disgusting and off-putting..."

"Marketing a product is at least as important as making a product, and this is bad marketing..."

elvinyung 3 hours ago 4 replies      
I still don't understand why CockroachDB doesn't just use Spark as the compute layer. Spark seems perfectly capable as a platform for performing arbitrary distributed computations on distributed data sources. It even has a full-fledged distributed SQL engine!

As it stands, it seems to me that CockroachDB is mostly just reinventing Spark from scratch, except maybe from a more OLTP-centric perspective.

Broadpwn Bug Affects Millions of Android and iOS Devices bleepingcomputer.com
44 points by ivank  12 hours ago   28 comments top 9
kalmi10 1 hour ago 0 replies      
Apple patched it in iOS 10.3.1 according to this report:http://cert.europa.eu/static/SecurityAdvisories/2017/CERT-EU...
pmontra 6 hours ago 4 replies      
> Users that didn't receive this month's Android security patch should only connect to trusted Wi-Fi networks

Turning off Wi-Fi before leaving home and office helps. Apparently few people do that. A customer in the tracking industry (beacons estimating people in stores) told me that about 80% leave Wi-Fi always on.

I hope I'll get that patch soon. The last update for my Sony phone was the security update of May. Nothing in June. I guess that most Androids didn't and won't get anything.

wyldfire 5 hours ago 1 reply      
> BCM43xx family of Wi-Fi chips included in "an extraordinarily wide range of mobile devices" from vendors such as Google (Nexus), Samsung, HTC, and LG.

It would be great if there were a published list of exactly which devices are vulnerable, or a way to check your device for whether this part was present. Is there anything like 'adb shell lspci' I could run to find out whether my devices have the broadcom parts? I know my Nexus 5x has a QCOM SoC, so I assume it lacks broadcom WiFi. But the rest of the family's devices -- what of those?

vvanders 5 hours ago 0 replies      
> The attacker doesn't need any user interaction to exploit the feature. A victim only needs to walk into the attacker's Wi-Fi network range.

> In its security bulletin, Google rated Broadpwn as a "medium" severity issue, meaning the company doesn't view it as a dangerous vulnerability, such as Stagefright.

Wait, really?

Black-Plaid 7 hours ago 1 reply      
> Artenstein has later confirmed on Twitter that connecting to a malicious network is not necessary.

> Users that didn't receive this month's Android security patch should only connect to trusted Wi-Fi networks and disable any "Wi-Fi auto-connect" feature, if using one.

What is the point of the second statement?

robin_reala 8 hours ago 2 replies      
There is no information on the status of this bug for iOS devices.

That slightly contradicts the headline.

Kikawala 6 hours ago 0 replies      
More details as well as how to trigger the bug and what devices have been tested against it: http://boosterok.com/blog/broadpwn/

We will know a lot more after @nitayart presents at BlackHat.

swiley 2 hours ago 0 replies      
I wonder if they're patching the firmware or just the Linux driver for the chip.
jmole 7 hours ago 0 replies      
Funny that Broadcom's Wifi business was just sold to Cypress last year.
IPFS and Filecoin Token a p2p decentralised replacement for HTTP avc.com
52 points by Osiris30  13 hours ago   5 comments top 2
jondubois 51 minutes ago 0 replies      
It's an interesting idea to have a coin that is backed by something that has real tangible value in the industry (storage space)... I'd like to know whether or not the intrinsic value behind Filecoin (in terms of $ worth of storage space) will be worth the cost of operating the network (in $ terms).

As a buyer of storage space, if you need storage, isn't it always going to be cheaper, faster and more convenient to use your own hard disks or a specialised service like S3?

Will people actually have an incentive to use Filecoin as a storage service instead of S3? If they don't, then the coin has no advantages over Bitcoin.

StavrosK 4 hours ago 1 reply      
I find IPFS very interesting, and I actually just created an IPFS pinning service with a friend (https://www.eternum.io/). I hope Filecoin will be as well-run as the IPFS project is, it certainly looks very interesting.
Index Search Algorithms for Databases and Modern CPUs (2010) [pdf] arxiv.org
50 points by tjalfi  14 hours ago   5 comments top 2
gravypod 11 hours ago 0 replies      
I can't wait for spatial databases with GPU acceleration. Things like massively parallel collision checking for complex shapes become relatively fast when you have a GPU.
flgr 10 hours ago 3 replies      
Original author here; was quite surprised to find this on the front page today.

Let me know if you have any questions. Happy to answer them!

Architect explains why large development in LA seems to be luxury development reddit.com
42 points by intull  6 hours ago   13 comments top 7
RealityNow 27 minutes ago 0 replies      
It blows my mind that virtually every major city in the developed world is facing this same problem, yet nobody is doing anything about it.

It's a pretty simple problem with a pretty simple solution. The problem is that local city councils have restricted the freedom to build through excessive zoning laws and regulations in order to increase housing prices for their own private investment benefit.

The solution is to relinquish them of this self-interested tyrant-like overbearing power and set these policies on the national level - basically how Japan does it. The more localized the power, the more self-interest is going to favor a minority of private individuals at the expense of society.

ThePadawan 2 hours ago 1 reply      
From my European point of view, I find the whole hangup about the parking space regulations very interesting.

Here in Zurich, there are the same sorts of complaints about parking for new buildings going up; however, there is now a different trend: because rent for a parking space is typically charged separately from the apartment's rent, some parking spaces simply can't be rented out because residents don't have cars.

In case you speak German and are interested in this sort of stuff, the regulations are available at https://www.stadt-zuerich.ch/content/dam/stzh/portal/Deutsch... .

It also shows a table on page 3 that explains how you actually are allowed and required to build less and less parking spaces the closer you get to the city center, so much so that if you look at the maps on pages 6 to 7, you can see that that grey area allows <= 10% of the parking of the white area.

dankohn1 4 hours ago 0 replies      
The parking minimums are destroying our cities. Nevertheless, even luxury developments lead to more affordable housing due to filtering: https://www.vox.com/cards/affordable-housing-explained/filte...
givemefive 4 hours ago 0 replies      
they only build luxury developments in Dallas too.. it's not exactly rocket science as to why.. the land is expensive and they need an ROI.

they take care of parking with prefabricated 5 story garages.

thecopy 4 hours ago 1 reply      
Government policies and regulations working against what their benefactors originally planned for - where have I seen this before?
pbreit 4 hours ago 3 replies      
I'm confused. $165/sf is very inexpensive ($200k for the 800sf condo with parking). Even $200-300/sf is not bad.
gxs 3 hours ago 0 replies      
My dad is in construction and I thought I'd chime in with another practical reason.

Once you're building at larger scales, the costs of materials for a luxury condo and a subluxury condo are different, but not astronomically so.

Developers will go to China, Mexico, etc. and source some really nice stuff very cheaply. Sure, there are exceptions, i.e., materials that are expensive no matter what, but once you figure out a way to use cheap labor, the actual building materials are cheap.

It's similar to luxury cars - the "luxury" part doesn't necessarily cost a lot more (but yes does cost a bit more) but it can be marked up a lot, lot more.

Show HN: A notebook-style Common Lisp environment github.com
149 points by cddadr  17 hours ago   74 comments top 6
fiddlerwoaroof 15 hours ago 1 reply      
This is really cool: now I'll have a reasonable way to introduce my coworkers to Lisp without making them learn Emacs at the same time. Anyway, would it be possible to include something like parinfer or paredit here, as well as some reasonable keybindings for them? In vim mode, I find these keybindings extremely ergonomic (loosely based on slimv's settings): https://github.com/fiddlerwoaroof/dotfiles/blob/master/emacs...
etiam 1 hour ago 0 replies      
Is anyone aware of a way to get notebook style Lisp inside Emacs? Along the lines of what EIN does for Python?

I really like the notebook format but I've yet to come across a browser window so good that I'm happy to give up an actual editor program for it.

math0ne 12 hours ago 0 replies      
Anyone know of a list or resource that has collected all of these notebook-style IDEs? I know of a few but I'm sure there are more out there.
21 17 hours ago 13 replies      
Can somebody comment on the status of Lisp in 2017?

If one were to learn a functional language, is Lisp a good choice today? Or is Haskell more appropriate?

brian_herman 9 hours ago 1 reply      
Anyone have any idea how to get this working on Debian?
kazinator 15 hours ago 0 replies      
> by cddadr

My TXR Lisp actually has that function. :)

Oops, I mean accessor.

Steve Jobs and the Missing Intel Inside Sticker kensegall.com
115 points by drawkbox  11 hours ago   61 comments top 15
sblank 3 hours ago 4 replies      
The Intel Inside campaign wasn't just a consumer branding strategy. First and foremost it was a predatory marketing campaign that turned into exclusionary behavior. PC firms that used Intel chips and put Intel Inside on their PCs were given funds to use in advertising and were reimbursed for "marketing expenses". In reality these marketing funds were actually a subsidy/discount (some would say kickback) on Intel chips. As Intel's power grew they would only give the PC manufacturers rebates if they bought 95% of their microprocessors from Intel. If they used AMD or other microprocessors, all the Intel rebates would disappear. By the end of the 1990s, Intel had spent more than $7 billion on the Intel Inside campaign and had 2,700 PC firms locked up. By 2001 these rebates were running $1.5 billion a year.

Intel was sued in Japan (for offering money to NEC, Fujitsu, Toshiba, Sony, and Hitachi), in the EU (for paying German retailers to sell Intel PCs only) and in the U.S. for predatory pricing, exclusionary behavior, and the abuse of a dominant position (HP, Dell, Sony, Toshiba, Gateway and Hitachi). The legal record is pretty clear that Intel used payments, marketing loyalty rebates and threats to persuade computer manufacturers, including Dell and Hewlett-Packard (HP), to limit their use of AMD processors. U.S. antitrust authorities have focused on whether the loyalty rebates used by Intel were a predatory device in violation of the Sherman Act. The European Commission (EC) brought similar charges and imposed a €1.06 billion fine on Intel for abuse of a dominant position.

The sum of these efforts not only killed competitors but it killed innovation in microprocessor design outside of Intel for decades.

Ironically Intel's lack of innovation in the 21st century is a direct result of its 20th century policy of being a monopolist.

habitue 9 hours ago 3 replies      
I guess I thought this might actually tell the story of how Apple negotiated not to have the Intel inside sticker. Instead it just states that the sticker wasn't there and goes "we can only guess"
ryandrake 5 hours ago 5 replies      
I just had a look at a newish Lenovo sitting next to me and it's packed with gaudy logos stuck to the inside hand-rests:

* Intel Inside

* AMD Radeon graphics

* Energy Star

* 2x JBL speakers (two mentions of JBL, one's not even a sticker)

* Dolby Digital Plus

...and a few others that depict generic features of the laptop (do I really need a sticker to tell me I have a webcam on this thing?). Honestly it just looks tacky, like a NASCAR car. I'll peel them off some time, but yuck, totally tasteless.

noonespecial 8 hours ago 2 replies      
Steve was right. It really set the MacBooks apart when all of the other computers were literally festooned with a dozen cartoonish stickers, making them look like cheap toys.

Worse still, as you used the computers in real life, all of those stickers degraded into a gluey mess that got all over everything when you touched them.

I still have flashbacks of using a heatgun and alcohol wipes to un-sticker 2 dozen new HP laptops before rolling them out. Ugh.

justboxing 9 hours ago 3 replies      
> I approached him with my biggest concern: Please tell me we won't have to put the Intel Inside logo on our Macs.

> With a big grin, Steve looked me in the eye and said, "Trust me, I made sure that's in the contract."

Isn't that all there is to it? If you don't want an "Intel Inside" sticker slapped on your computer, you negotiate it in the Contract.

Was Intel that aggressive that they wouldn't sell the chip unless you slapped their sticker on your computers?? What am I missing?

remir 7 hours ago 0 replies      
Apple switching to x86 was great publicity for Intel. It was a big deal. People, blogs, mass media talked about it a lot.

Because of this, I'm sure Steve negotiated a good price on those chips without Apple needing to be part of the "Intel Inside" program to get cheaper CPUs.

profmonocle 6 hours ago 1 reply      
Something I'm more curious about is how they got out of putting carrier branding on the iPhone. As others have said, the Intel Inside program wasn't mandatory. But from what little I've heard about this, AT&T was very reluctant to concede this when the iPhone debuted. I wonder what concessions Apple had to make, and if the initial exclusivity had anything to do with it.
robin_reala 8 hours ago 0 replies      
Dell tried this with their Adamo line but couldnt go as far as removing it entirely; they ended up laser engraving it onto the bottom so it wouldnt be as gaudy: https://www.ifixit.com/Guide/First-Look/Dell-Adamo/719/1#s38...
phmagic 4 hours ago 0 replies      
Apple has similar strict guidelines for Made for iPod devices.
quicklime 2 hours ago 0 replies      
Am I the only one who thinks that the three ads mentioned in the article (Snail, Burning Bunny, Steamroller) are incredibly tacky?

The Apple ad from that era that people love and remember is Richard Dreyfuss' "Crazy Ones", and the author even thinks that they "upgraded to Jeff Goldblum".

ginger123 5 hours ago 1 reply      
Did Ken Segall work at Apple during the time of the iMac? According to his LinkedIn profile he was a consultant for Apple between 2005 and 2008. The iMac was introduced in 1998.
olivermarks 2 hours ago 0 replies      
Jobs was a genius at finding and employing the best ad agencies/talent and letting them work their magic. It is a large part of his mystique.
empressplay 5 hours ago 1 reply      
This article is not quite correct about why Apple switched to Intel. IBM was unable to provide a G5 chip suitable for laptop use. That's the whole reason in a nutshell.
vacri 7 hours ago 1 reply      
This is a very long-winded way to say "We tried one tactic, they didn't take the bait, so we suckered them by buying their product". It's painted as an Apple victory because there's no sticker on the laptop, but not as an Intel victory, despite Apple switching to their chips. Weird.
48-Year-Old Multics operating system resurrected slashdot.org
19 points by MilnerRoute  4 hours ago   4 comments top 4
DrScump 3 hours ago 0 replies      
Some may not be aware that UNIX was named as a sort of take-off on Multics.


Theodores 48 minutes ago 0 replies      
24-year-old website resurrected - haven't seen a link to Slashdot in a long time.
Animats 3 hours ago 0 replies      
Aw. Finally. I wonder if some of the Multics fans I once knew are still alive to see this.
oldmancoyote 4 hours ago 0 replies      
Having been compelled to program Multics, I am appalled that anyone would want to resurrect it. One of the happiest days of my life was when my employer got rid of it. I think they gave it to Iran. : )
Google is funding the creation of software that writes local news stories techcrunch.com
162 points by tokyoSurfer  17 hours ago   117 comments top 28
11thEarlOfMar 17 hours ago 10 replies      
I ran across this article while researching a stock and as I read, I kept thinking, "This was not written by a person. This was written by software." [0]

I checked the attribution, and there is a person's name on it. Sure, any hack can write and publish and this is probably just another example. But the odd style doesn't even strike me as 'writing the way I think' or writing and publishing quickly without editing. For example, from the 2nd paragraph, "The corresponding low also paints a picture and suggests that the low is nothing but a 97.89% since 11/14/16." I can't gather any meaning from that statement, yet it has oddly specific details.

I am not glad to see this trend and not glad that Google is embarking on this path. I suppose it is inevitable, but unless there is expertise built into this AI that can extract meaning from data on my behalf and present it in a way that is more insightful and interesting than I am, it will become yet another source of chaff I'll have to filter.

Can we at least, please, flag AI generated prose as such?

[0] https://www.nystocknews.com/2017/07/05/tesla-inc-tsla-showca...

reallydattrue 16 hours ago 7 replies      
Very Relevant: https://www.youtube.com/watch?v=K2Ut5GqQ1f4

Google will one day be the arbiters of news. If something doesn't fit their world view, whether it's true or not, it will be removed from the results.

I think now is the time to setup a different model and remove their monopoly. Internet freedoms are at stake here.

Do no evil? Yeah right.

jimrandomh 16 hours ago 2 replies      
I would strongly prefer that robo-written news not exist, not appear in the results of any searches I make, and not appear in any feed that I read. It is pollution that makes real information and insight harder to find. Does anyone actually like this stuff?
adorable 10 hours ago 0 replies      
What would those article-writing robots use as their primary source of information?

If they write local news, will they use social media as their data source? Other sources?

gumby 14 hours ago 0 replies      
The irony is that this is how radio news got its start. Ronald Reagan became an actor after being a "sports announcer" -- what he really did was read the ticker tape of a game in progress ("smith at plate. first ball strike no swing. Second ball base hit") and create an exciting story to go with it: "And Smith steps up to the plate. He flexes his muscles, kicks the ground and takes his stance. He passes on the first ball... strike! Here's the next pitch... he swings... solid towards third base. Is it a foul? NO!! AND HE'S SAFE ON FIRST BASE!!!"

Really most "news" articles are only a couple of paragraphs long anyway and could be expanded or contracted on the spot to match the interest of the reader.

andy_ppp 3 hours ago 1 reply      
I think the disgust factor will go away in a few years (maybe less) when most content is written by machines with slanting that the models say you will enjoy. Or that will cause you to spend money. Or click ads.

You think you won't succumb to their influence now, but it'll happen and there will even be "journalists" who are machines that you like. The filter bubble will completely adapt to your every need to make you feel fantastic about reading their copy, humans won't be able to compete.

ams6110 15 hours ago 1 reply      
Don't most reporters start out with obscure/niche stories so they can hone their writing styles on relatively "unimportant" or filler pieces? If machines do all of that work, how do reporters develop the experience to be able to write an organized, in-depth important story?
methodin 12 hours ago 1 reply      
Random thoughts:

* Facts delivered with arbitrary fluff words are pointless even when written by a human - it obfuscates the real purpose, which is the data

* Companies pay humans to deliver articles in most cases, and the bias of the writer or the institution that paid for it shines through. I cannot find a real difference between intentional angling by payment or by algorithm

* When the day arrives that computers can generate actually new, intelligent and thoughtful pieces, I for one will be very interested in reading them. Sadly there would be millions of variations that could occur at an astounding pace. We'd then need algorithms to filter the generated content for the things that are really noteworthy.

* News at its core is a sequence of facts, which begs the question: do we really need the cruft around those facts, which can often lead to misinterpretation?

tannhaeuser 3 hours ago 0 replies      
What about developing a counter-bot that detects and flags algorithmic content?

Edit: come to think about it, isn't it what Google should be rather doing?

cwp 4 hours ago 0 replies      
Ugh. The last thing the world needs is more formulaic news stories. We need to move past the idea that the web is a virtual newspaper.

News sites don't even use hyperlinks effectively, let alone audio/video/interaction. We should use AI to replace newspapers, not reporters.

tyingq 8 hours ago 0 replies      
Somewhat ironic as Google has been fighting link spammers that use autogenerated content for years. Software like this is popular in that space: https://wordai.com
downandout 12 hours ago 1 reply      
Ironically, the AdSense "valuable inventory" policy prohibits showing Google ads on automatically generated content [1]. I wonder if they will follow their own rules and refuse to show ads on content generated by this tool.

[1] https://support.google.com/adsense/answer/1346295?hl=en

akadien 11 hours ago 0 replies      
Google is the problem. I thought they didn't want to be evil.
speeq 13 hours ago 0 replies      
I recently found a YouTube channel with news videos that seem generated mostly programmatically, with a robot voice-over and a combined +44M views on the channel:


I wonder who's behind these and similar channels.

endswapper 16 hours ago 2 replies      
This submission is at least tangentially relevant: https://news.ycombinator.com/item?id=14673489.

Combining these presents an interesting opportunity to create "future news" (news that is technically fake until it isn't) thereby owning the news cycle by always being first.

kronos29296 16 hours ago 3 replies      
What guarantee is there that the published news isn't fake? This might start something like viral fake Facebook posts. We already have enough of those. Now we have an automated fake news generator where you can post your own fake news for free.

This is what it will become one day. Hope they have something to stop it.

fiatjaf 12 hours ago 0 replies      
Why? This is horrible. Why not just publish the raw data reporters got?
mc32 16 hours ago 1 reply      
At least in the near future, this has the potential to make facts-and-figures based news less biased (less influenced by author idiosyncrasies). Personally, I would rather news not be laden with personal flourishes that authors add either as filler or due to personal opinion.

I do imagine further into the future, the automated systems will be "improved" with tone and bias to better fit the tastes of the individual reader, to the detriment of us writ large.

kevinphy 15 hours ago 0 replies      
A relevant and inspiring project with the statement:

"Only Robot Can Free Information"


Focusing on building robots for readers instead of news providers would be the future.

divbit 15 hours ago 0 replies      
For the reporter friends I have, I'm not sure how I feel about that - if I were a reporter, I feel I would want software that enhances and improves my job experience and reporting ability, rather than flat out replaces it. (Not to criticize Google; I'm sure any company or startup could be doing the same.)
mnglkhn2 13 hours ago 1 reply      
The question is: How are those news items going to be named: Robo news?

Or maybe "fake news", until 'elevated' by Google curators?

Maybe Microsoft's Ai bot experiment might offer a cautionary tale.

zzalpha 16 hours ago 0 replies      
Having just finished a play-through of Deus Ex: Mankind Divided, this immediately makes me think of Eliza Kassan... it really is odd how many ideas in that game don't seem especially far-fetched these days.
chrismealy 16 hours ago 1 reply      
IIRC there was a small-town paper in the early 1990s that wrote high school sports stories with a HyperCard stack.
velobro 16 hours ago 0 replies      
Good! I'm sure a bot is a lot better writer than the high school graduates my local paper employs.
apeacox 16 hours ago 0 replies      
Welcome to the Ministry of Truth
DanBC 15 hours ago 0 replies      
There are 2 things I hope google or other AI companies focus on.

1) Making board papers more readable. There are a bunch of trusts in the NHS with a stream of very complex board papers. Something to reduce unneeded complexity would save a lot of time and potentially money.

2) Converting all important documents to an Easy Read version. There are a bunch of writing styles for people with learning disability, low IQ, or low literacy. Easy Read is one. A company like Google focusing on this would be good because they'd improve the evidence base; they'd bring a bit more standardisation; and they'd improve access to information for many people.

Kenji 14 hours ago 0 replies      
Without machines acquiring true understanding of what is happening, this is going nowhere. I applaud their effort but it is misguided.
mtgx 16 hours ago 2 replies      
So what happens when these bots are manipulated into writing fake news (in the same way the way-better-funded Google search is still manipulated for SEO purposes)?
Loudness (2007) chicagomasteringservice.com
121 points by mjgoins  15 hours ago   70 comments top 12
chrismealy 8 hours ago 2 replies      
It's not just about volume. Tracks mastered with a ton of compression trick your brain into hearing them as louder than they really are, which is great for a song or two (or if you want people to pay attention to your TV commercial), but for listening to a whole album it wears you out. If you have an album you love but somehow never make it all the way through, this is probably why. The perception of full volume gets your lizard brain aroused, which is great if you're in the club, but not if you're in the mood to listen to the first three Led Zeppelin records in a row.

On the other hand, older recordings with more dynamic range might sound thin at low volume, but are much richer at higher volumes (you can hear the individual instruments better and feel the space in the sound). If you try comparing older and newer masterings at a good volume the newer mastering usually sounds kinda mushy.

em3rgent0rdr 5 hours ago 1 reply      
Fortunately many audio providers now have been fighting back against this. For instance YouTube will punish video uploads that are louder than -13 LUFS by attenuating the level. This will provide a somewhat level playing field and encourage people to upload with a reasonable dynamic range.
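The normalization step described above is easy to sketch. Below is a minimal, hypothetical model of it in Python: it approximates loudness with plain RMS in dBFS (a real LUFS measurement per ITU-R BS.1770 adds K-weighting and gating), and, like YouTube, only ever turns tracks down, never up. The -13 figure is taken from the comment above; the function names are illustrative, not any platform's API.

```python
import math

TARGET_DB = -13.0  # reference level from the comment above; true LUFS approximated as RMS dBFS

def rms_dbfs(samples):
    """Approximate loudness as RMS in dBFS (real LUFS adds K-weighting and gating)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

def normalization_gain(samples, target_db=TARGET_DB):
    """Gain in dB a player would apply: louder-than-target tracks are
    attenuated; quieter ones are left alone (no boost)."""
    return min(0.0, target_db - rms_dbfs(samples))
```

Under this model a full-scale square wave (0 dBFS) gets turned down by 13 dB while a quiet track passes through untouched - which is exactly why brickwalling no longer buys any loudness on playback.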
mortenjorck 12 hours ago 11 replies      
For practical listening, I actually prefer modern brickwall mastering techniques to more traditional mastering with a high dynamic range, for one reason: what the author sees as "hijacking the volume control from the listener" I would consider the opposite.

With a high dynamic range, a headphones listener may feel the need to adjust the volume several times in a song to boost the clarity of softer sections or to make louder sections more comfortable to the ears, depending on the listening environment. With a "loud," low dynamic range, however, the listener need only adjust the volume once, as the whole track is roughly the same volume. In other words, the listener is in control of the volume, rather than the engineer.

morecoffee 13 hours ago 2 replies      
Tangential, but for the longest time I couldn't figure out why VLC always played music / DVDs at such low volume. Setting the system volume to max and overdriving VLC's volume slider was the only way I could actually hear the soft parts.

Recently I found out about the volume compressor, which with a single check box does exactly the right thing. I asked myself "why the heck isn't this box checked by default?" I think the answer is with audio purists wanting to stem the loudness war.

When reading about CD mastering maxing out the volume, It seems like it is the right decision. Most people do want the loudest setting, no mess with the EQ, compressors, etc. Only a tiny population wants to preserve the fidelity of the amplitude.

cromon 10 hours ago 0 replies      
This is something I have battled with for a long time. Personally, I master to make the music sound good - sometimes that is loud (new dance music), sometimes not so much (remasters of old African recordings).

Coupled with the data from this page [1] there is no point in going too loud anyways, that's why you have gain / volume control. I'm not sure how I feel about streaming services implementing extra processing tbh. Spotify is the worst culprit adding limiting which can significantly change the sound of a recording.

I just wish other engineers would have more pragmatism in this industry, way too much overcooked and distorted music around.

[1] http://productionadvice.co.uk/online-loudness/

thatswrong0 10 hours ago 2 replies      
This is pretty funny to me because I mostly listen to (and produce) electronic music.. and there are pretty much no rules when it comes to electronic music and loudness. Stupidly loud music can actually sound pretty dang good [0][1]. The momentary RMS in some Moody Good tracks can actually hit _above_ 0dBFS.

If you have the right source material, you can brickwall the hell out of tracks and not notice the distortion.. or perhaps the distortion will even add pleasant artifacts. One of the more prominent issues with making things stupid loud is intermodulation distortion, but that really only becomes noticeable when you have pure tones or vocals being mashed into the limiter. If the source material is already distorted (think screechy dubstep synths), then it probably don't matter.

But yeah, when you're dealing with more traditional kinds of music, which often involves vocals or a lot more subtlety in the timbre of the instruments, brickwalling is probably not the best call. It seems that the Search and Destroy "remaster" sounding terribly distorted was intentional.. but IMO it's not very listenable, nor does the distortion really bring the grungy character that I think they were going for. It just sounds bad.

[0] https://soundcloud.com/moodygood/mtgfyt-vol1

[1] https://www.youtube.com/watch?v=5lsX8pUaloY
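The "mashed into the limiter" effect the comments above describe can be caricatured in a few lines of Python. This is only a sketch - a real brickwall limiter uses lookahead gain reduction rather than naive hard clipping - but it shows the trade: average level rises while peaks stay pinned at the ceiling.

```python
import math

def hard_clip(samples, ceiling=1.0):
    """Crudest possible 'brickwall': anything past the ceiling is flattened,
    which is where the audible distortion comes from."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]

def drive_and_clip(samples, gain_db, ceiling=1.0):
    """Push a track 'into the wall': apply gain, then clip. RMS (perceived
    loudness) goes up while the peak never exceeds the ceiling."""
    g = 10 ** (gain_db / 20)
    return hard_clip((s * g for s in samples), ceiling)

def rms(samples):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))
```

Driving a half-scale sine 12 dB into the ceiling turns it into a near-square wave: louder, but with exactly the added harmonics (and, on complex material, intermodulation products) discussed above.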

ttoinou 10 hours ago 1 reply      
These mastering techniques have their legitimate uses! Some notes:

* it's a step in musical production where having experience, skills and contact with the artist matters. Not all compressors and limiters are created equal, and the default values you use in your media player may not sound as good as what an audio engineer might have done..

* Not everyone has good hardware and a good environment for listening to high-dynamic-range music, like those listening to classical music / jazz / Philip Glass, so these business decisions to increase volume for the market made sense at the time, I think. Audio engineers simply took advantage of having a technically better medium (CD) to make audio sound better (from what I've read these techniques did not work well on vinyl)

* The loudness wars didn't have an effect on old records since, as one can see in this article, we can still find the old dynamic ones (and so we actually have the choice of listening to the old untouched record or the new compressed-for-the-market record, and that is a good thing!)

* These music stats (mean RMS, peak RMS, max mean RMS) look at instantaneous dynamics, but a look at the overall dynamics of a song is also very important! A good article on this topic argues that songs have not lost that much overall dynamic range: ['Dynamic Range' & The Loudness War, 2011] http://www.soundonsound.com/sound-advice/dynamic-range-loudn...
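One common single-number proxy for those instantaneous stats is the crest factor, the peak-to-RMS ratio. A minimal sketch (instantaneous only - it says nothing about the song-level dynamics the linked article discusses):

```python
import math

def crest_factor_db(samples):
    """Peak-to-RMS ratio in dB: a rough measure of how 'squashed' a master is.
    Heavily limited tracks sit in the low single digits; dynamic ones score higher."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)
```

A pure sine measures about 3 dB; a square wave (the limit of brickwalling) measures 0 dB.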

quakenul 9 hours ago 0 replies      
Mastering is in a way comparable to capitalism. You can certainly try to be nice about it, but pushing harder will usually get you further, because most people care more about one aspect than all the other aspects combined. In capitalism it is getting a great product for a great price. In mastering, it is loudness.

Loudness is a bastard. There is a reason why all the pros are usually very, very careful about level matching when doing any sort of audio comparison. Even when you know that louder can easily fool you into thinking something is better (which most listeners don't), you're still susceptible if you don't counteract it. Wanna convince a recording artist in the studio it's great? Turn up those big speakers. Instant gratification.

When it comes to music consumption I like to think this is not really a problem: The sound of compression and distortion is the sound of current music and there is nothing inherently bad about it. Older generations will tend to oppose any new musical trend for various reasons, which all end up being subjective. The younger generations that grow up on this new sound do not care about brickwall limiting, because there is nothing to fucking care about.

Music production has been and forever will be a mix of mostly people copying other people and flowing with the stylistic currents while adding a little something themselves. Sometimes something radical will happen. Mostly not. If you wanna stay relevant you go with the former and keep reaching for the latter. Pretty much the same as with coding or design.

amelius 13 hours ago 3 replies      
Was there any way we could have prevented the "loudness war"?
mrlucax 7 hours ago 0 replies      
>but for listening to a whole album it wears you out.

Looking at you, Death Magnetic.....

golergka 11 hours ago 1 reply      
Since YouTube and most streaming services started to automatically balance tracks based on their average perceived loudness (not all of them use the same metric, but the purpose is the same), the loudness war is almost dead. If you brickwall your song, it will not be played louder than the competition anymore.
krallja 10 hours ago 1 reply      
2007, if Internet Archive is to be believed https://web.archive.org/web/20070808082932/http://www.chicag...
Before 1948, LA's Power Grid Ran at 50hz gizmodo.com
179 points by curtis  15 hours ago   142 comments top 13
Animats 14 hours ago 2 replies      
Until the 1990s, Grand Central Station in New York had almost everything - 60Hz commercial power, 40Hz LIRR power (Pennsylvania Railroad standard) 25Hz NYC Subway power, 700VDC Metro North power, 600VDC subway third rail power, and some old Edison 100VDC power. There was a huge basement area full of rotary converters to interconvert all this. Various ancient machinery and lighting ran on different power sources.

In the 1990s, Grand Central was rewired, and everything except railroad traction power was converted to 60Hz. All conversion equipment was replaced with solid state gear. It took quite a while just to find everything that was powered off one of the nonstandard systems.

It wasn't until 2005 that the last 25Hz rotary converter was retired from the NYC subway system. (Third rail power is 600VDC, but subway power distribution was 13KV 25Hz 3-phase.)

shagie 9 hours ago 1 reply      
For some other fun with California's 60hz legacy...

The timezone database (maintained by people who are very particular about making sure that a specified time is a well-known time) has a note in the northamerica data file:

# From Paul Eggert (2016-08-20):
# In early February 1948, in response to California's electricity shortage,
# PG&E changed power frequency from 60 to 59.5 Hz during daylight hours,
# causing electric clocks to lose six minutes per day. (This did not change
# legal time, and is not part of the data here.) See:
# Ross SA. An energy crisis from the past: Northern California in 1948.
# Working Paper No. 8, Institute of Governmental Studies, UC Berkeley,
# 1973-11. http://escholarship.org/uc/item/8x22k30c
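The six-minutes-per-day figure in Eggert's note is easy to sanity-check: a synchronous electric clock counts mains cycles, so at 59.5 Hz it runs at 59.5/60 of true speed, losing 1/120 of elapsed time. The assumption below that "daylight hours" means roughly 12 hours a day is mine, not the note's:

```python
# At 59.5 Hz a synchronous clock loses 0.5/60 = 1/120 of the elapsed time.
loss_fraction = (60 - 59.5) / 60

minutes_per_day = 24 * 60
full_day_loss = minutes_per_day * loss_fraction        # about 12 min if slowed around the clock
daylight_loss = (minutes_per_day / 2) * loss_fraction  # about 6 min if slowed only ~12 h/day

print(full_day_loss, daylight_loss)
```

The half-day figure lands right on the six minutes the tz comment reports.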

lb1lf 15 hours ago 1 reply      
Incidentally, Japan faces this very issue to this day; part of the country runs on 50Hz, the rest on 60Hz.

This made matters trickier after Fukushima, as the nation is effectively two smaller electricity grids, not one large one - so making up for the shortfall became harder than it could have been. (However, there's a massive frequency converter interface between the two grids.)

Edit: Aw, shucks - now that I revisit the article, I see the exact same points being made in that article's comment section. My bad.

ryandrake 12 hours ago 11 replies      
My mind always boggles at humanity's general inability to standardize on one thing without great pain and fighting. Whether it's Metric vs. Imperial, Beta vs. VHS, Blu-ray vs. HDDVD, OpenGL vs. DirectX, USB speeds, power connectors, instant messaging protocols. Nobody can just sit together and cooperate--we always have to go through that painful period with multiple incompatible standards that fight it out until (hopefully) one of them wins.
plorg 14 hours ago 1 reply      
It took a long time to standardize and integrate the US power grid (which even today is basically 3 loosely-connected systems). Some parts held out longer than others.

My brother recently visited a hydro dam in northern Minnesota that had one turbine operating at 25hz even as recently as the 90s, serving at least one industrial customer still running equipment that predated the interconnected 60hz grid.

seandougall 14 hours ago 4 replies      
Fascinating -- I'd always heard that 24 fps developed as a US film standard compared to 25 fps in Europe because of the difference in AC power frequency (24 and 60 being a pretty straightforward integer ratio). And yet, during this time, LA became firmly entrenched as the center of the American film industry while producing 24 fps films. I wonder how that squares -- was this something people had to deal with, or does this article possibly overstate how widespread 50 Hz power was?
janvdberg 13 hours ago 1 reply      
This 99% Invisible episode also talks about this (third segment): http://99percentinvisible.org/episode/you-should-do-a-story/
wolfgang42 13 hours ago 2 replies      
I had a Waring Blendor [sic], cat. no. 700A, with the widest range I've ever seen: "115 Volts, 6 Amps, 25 to 60 cycle A. C. - D. C." I haven't been able to pin down an exact date on this model, but it seems to date from the 1940s or so, when the U. S. power grid still hadn't completely settled on a standard. I've read that portions of Boston still had 110 volts DC in residential areas up through the 1960s, though I've been unable to find much detail about this.
Aloha 15 hours ago 5 replies      
Growing up in Southern California, I remember always finding old clocks with conversion stickers on them - I've been looking for a good source on the technical details to find out what they needed to do to accomplish the changeover. I'm not willing to pay 35 bucks to read the IEEE article however.
kens 9 hours ago 0 replies      
The story of why parts of the US used 25 Hertz power instead of the standard 60 Hertz is interesting. Hydroelectric power was developed at Niagara Falls starting in 1886. To transmit power to Buffalo, Edison advocated DC, while Westinghouse pushed for polyphase AC. The plan in 1891 was to use DC for local distribution and (incredibly) compressed air to transmit power 20 miles to Buffalo, NY. By 1893, the power company decided to use AC, but used 25 Hertz due to the mechanical design of the turbines and various compromises.

In 1919, more than two thirds of power generation in New York was 25 Hertz and it wasn't until as late as 1952 that Buffalo used more 60 Hertz power than 25 Hertz power. The last 25 Hertz generator at Niagara Falls was shut down in 2006.

Details: http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=4412948

mysterypie 12 hours ago 3 replies      
> customers could bring their old 50hz appliances for free adjustments and exchanges, [including] 380,000 lighting fixtures

Surely ordinary light bulbs don't care about the frequency. Do they mean the electronics for fluorescent lamps? Were those common in the 1940s?

gumby 14 hours ago 0 replies      
There are other, smaller countries that have mixed frequencies.

PG&E (California's primary gas and electric utility) still has DC tariffs, though I believe they provision it by installing a converter at the point of use. I believe this is just for elevators.

Parts of Back Bay in Boston were still wired for 100V DC mains voltage into the 1960s.

agumonkey 14 hours ago 1 reply      
Modeling Agents with Probabilistic Programs agentmodels.org
94 points by abeinstein  15 hours ago   3 comments top
anarchy8 14 hours ago 2 replies      
A great source, but was creating a subset of JavaScript really necessary?
GPU Performance for Game Artists fragmentbuffer.com
92 points by mnem  16 hours ago   4 comments top 2
midnightclubbed 12 hours ago 0 replies      
This is a great article for programmers too - maybe too much for non-technical artists.

The article doesn't really go into how to make more performant art, and that's probably for the best - a lot of that will be game- and engine-dependent. It's really easy to 'optimize' in a way that hurts the engine (or for programmers to build/configure the engine and shaders in a way that is counter to the artists' workflow or visual targets).

Show HN: ORC Onion Routed Cloud orc.network
81 points by sp0rkyd0rky  17 hours ago   25 comments top 7
kodablah 13 hours ago 1 reply      
Sorry I have not dug too deeply, but I have some questions.

1. Are there controls (i.e. proof-of-stake) to enforce equitable and lasting storage of your items on others' machines?

2. What is the consensus model for marking peers as bad actors?

3. What are the redundancy guarantees? That is, how many nodes store my data?

4. What is the "currency" of sorts that I must "pay" in order to store a certain amount? Amount of hard disk I contribute back?

5. Why was the AGPL chosen? Surely adoption by any means, commercial or otherwise, would be welcome in a system that has equitable sharing guarantees. Now if I want to implement your spec under my license of choice, I can't even read your reference implementation.

Maybe some fodder for the FAQ. If not answered later, I'll peruse the whitepaper.

woodandsteel 13 hours ago 1 reply      
This looks like the opposite of IPFS in terms of what gets stored on your computer if you are part of the network.

On IPFS, something gets on your computer only if you decide to let it, and there are blacklists to automatically keep off material you don't want.

On ORC, it seems that encrypted pieces of everything get stored, so you can wind up with all sorts of things you don't want, but on the other hand might be able to deny legal responsibility.

simcop2387 7 hours ago 1 reply      
Based on what I'm reading here, it looks a lot like Freenet. Does ORC make any kind of privacy promises like it does?
fiatjaf 15 hours ago 2 replies      
What is it? There's no explanation of what it is on the website, although there is a whitepaper, documentation, a tutorial and a description of the protocol.

From the whitepaper Abstract:

"A peer-to-peer cloud storage network implementing client-side encryption would allow users to transfer and share data without reliance on a third party storage provider. The removal of central controls would mitigate most traditional data failures and outages, as well as significantly increase security, privacy, and data control. Peer-to-peer networks are generally unfeasible for production storage systems, as data availability is a function of popularity, rather than utility. We propose a solution in the form of a challenge-response verification system coupled with direct payments. In this way we can periodically check data integrity, and offer rewards to peers maintaining data. We further propose that in order to secure such a system, participants must have complete anonymity in regard to both communication and payments."

philippnagel 15 hours ago 2 replies      
Interesting, does anyone know how this is related to storj.io?
woodandsteel 10 hours ago 1 reply      
The web page says files are encrypted and then split into chunks that are stored around the network, and then reassembled when you want to access your file. That sounds like no one could alter your file.

The webpage also says, "Redundancy is achieved through the use of erasure codes so that your data can always be recovered even in the event of large network outages."

Does this mean files can't be lost, as long as you keep paying your bill?
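Roughly, yes: with an (n, k) erasure code, any k of the n stored pieces are enough to reconstruct the file, so data survives as long as enough nodes stay online. The simplest instance is a single XOR parity piece, which tolerates the loss of any one chunk; this is just a sketch of the idea, not ORC's actual scheme (real systems use Reed-Solomon codes to tolerate multiple losses):

```python
from functools import reduce

def xor_all(pieces):
    # Byte-wise XOR of equal-length byte strings.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), pieces)

def add_parity(chunks):
    # n = k + 1 coding: one extra XOR-parity chunk tolerates one loss.
    return chunks + [xor_all(chunks)]

def recover(pieces, lost_index):
    # XOR of all surviving pieces reconstructs the missing one.
    return xor_all([p for i, p in enumerate(pieces) if i != lost_index])

chunks = [b"aaaa", b"bbbb", b"cccc"]          # equal-sized data chunks
pieces = add_parity(chunks)
assert recover(pieces, 1) == b"bbbb"          # lose a data chunk, get it back
```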

trackofalljades 4 hours ago 0 replies      
Is this viral marketing for Silicon Valley?

Mostly kidding, but really, this is pretty close to "Pied Piper" isn't it?

The Matrix Cookbook (2012) [pdf] dtu.dk
82 points by nabla9  16 hours ago   8 comments top 5
imfletcher 10 hours ago 1 reply      
I clicked wondering what kind of food they were making from the movies.
ivan_ah 13 hours ago 0 replies      
Since we're on the topic of matrices and linear algebra, here is a tutorial on the basics: https://minireference.com/static/tutorials/linear_algebra_in...
kcanini 6 hours ago 0 replies      
This PDF literally got me through grad school. It's an amazing reference.
refrigerator 7 hours ago 0 replies      
Similar, but more "cheat-sheet" style: http://www.cs.nyu.edu/~roweis/notes/matrixid.pdf
ivan_ah 13 hours ago 1 reply      
This is an awesome resource that I keep coming back to again and again. Save this somewhere on your computer so you'll have it handy whenever you see some weird matrix derivative...
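As a taste of what the cookbook collects: one of its most-used identities says the gradient of the quadratic form x^T A x is (A + A^T) x. A quick numerical check against finite differences (pure Python, small dense matrices, just to illustrate how you'd sanity-check any identity from the book):

```python
def quad_form(A, x):
    # x^T A x for a small dense matrix, plain lists.
    n = len(x)
    return sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))

def analytic_grad(A, x):
    # The cookbook identity: d(x^T A x)/dx = (A + A^T) x
    n = len(x)
    return [sum((A[i][j] + A[j][i]) * x[j] for j in range(n)) for i in range(n)]

def numeric_grad(A, x, h=1e-6):
    # Central finite differences as an independent check.
    grads = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        grads.append((quad_form(A, xp) - quad_form(A, xm)) / (2 * h))
    return grads

A = [[1.0, 2.0], [3.0, 4.0]]
x = [0.5, -1.5]
assert all(abs(a - n) < 1e-5 for a, n in zip(analytic_grad(A, x), numeric_grad(A, x)))
```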
Linux tracing systems and how they fit together jvns.ca
225 points by ingve  1 day ago   50 comments top 12
616c 20 hours ago 3 replies      
I am a huge fan of jvns. I wish I could be her when I grow up.

Does anyone know of someone building the same style of introspection tools for tracing, profiling and networking, in the spirit of her body of work (not just this post), but for Windows?

I know of a few scattered posts here and there, usually from PFEs on Microsoft blogs, but the landscape of dedicated bloggers seems lacking to a novice like me.

bitcharmer 12 hours ago 2 replies      
I'm surprised Brendan Gregg hasn't been mentioned here yet. He's the Linux tracing/profiling god.


Don't get me wrong, I respect Julia Evans as a professional, but what she mostly does is simplify other people's hard work and in-depth analysis of difficult problems in various layers of the technology stack.

lma21 1 hour ago 0 replies      
Great article, sums up the Linux tracing domain in a pretty neat way. I've used strace / perf / dtrace extensively since these are the only tools our clients' infrastructures can support, and it's always a bugger when you're on an older system and your hands are tied. Never tried eBPF yet; I should look into it once the kernel 4.7+ release hits RHEL.
everybodyknows 16 hours ago 0 replies      
For those willing to get their hands dirty in low-level kernel code, the perhaps simplest tracing tool is dynamic-debug: https://github.com/torvalds/linux/blob/master/Documentation/...
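For anyone who hasn't used it: dynamic-debug lets you switch pr_debug() call sites on and off at runtime through debugfs, with no recompile. A typical session looks roughly like this (a sketch following the kernel docs linked above; requires root and a kernel built with CONFIG_DYNAMIC_DEBUG):

```shell
# Mount debugfs if it isn't already
mount -t debugfs none /sys/kernel/debug

# Enable all pr_debug() sites in one source file ('+p' = print)
echo 'file svcsock.c +p' > /sys/kernel/debug/dynamic_debug/control

# Or target a whole module, then watch the output
echo 'module usbcore +p' > /sys/kernel/debug/dynamic_debug/control
dmesg --follow

# Turn it back off
echo 'module usbcore -p' > /sys/kernel/debug/dynamic_debug/control
```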
0xcde4c3db 18 hours ago 2 replies      
I'm still not seeing the part where they actually fit together; it basically just looks like an accumulation of ad-hoc tools with no overarching concept. They look like they could be very useful tools, but there doesn't seem to be any architecture there.
kronos29296 14 hours ago 0 replies      
For a guy who knows nothing about tracing, it was a post that made me understand at least some of it. The doodles really made me smile instead of a boring flowchart or diagram using a drawing tool. Another post going to my collection of interesting posts.
relyio 17 hours ago 2 replies      
You should mention it is the Ecole Polytechnique... de Montreal. Usually, when people say "Ecole Polytechnique" they mean the original one.

It's like saying you went to MIT (Minnesota Institute of Technology).

Other than this small nit, great article.

bjackman 15 hours ago 0 replies      
In my team we use a tool we developed called TRAPpy [1] to parse the output of ftrace (and systrace) into Pandas DataFrames, which can then be plotted and used for rich analysis of kernel behaviour.

We also integrate it into a rather large wider toolkit called LISA [2] which can do things like describe synthetic workloads, run them on remote targets, collect traces and then parse them with TRAPpy to analyse and visualise the kernel behavior. We mainly use it for scheduler, cpufreq and thermal governor development. It also does some automated testing.

[1] https://github.com/ARM-software/trappy

[2] https://github.com/ARM-software/lisa
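For anyone curious what that parsing step involves: raw ftrace output is just text with a fixed prefix (task-pid, CPU, flags, timestamp, event name, payload), so the core of any such tool is a line parser along these lines. This is a hypothetical sketch, not TRAPpy's actual code; TRAPpy then builds Pandas DataFrames from records like these.

```python
import re

# One ftrace text line looks like:
#   bash-1234  [002] d..3  5678.901234: sched_wakeup: comm=foo pid=42 ...
LINE_RE = re.compile(
    r'^\s*(?P<task>.+)-(?P<pid>\d+)\s+'
    r'\[(?P<cpu>\d+)\]\s+(?P<flags>\S+)\s+'
    r'(?P<ts>[\d.]+):\s+(?P<event>\w+):\s*(?P<fields>.*)$'
)

def parse_line(line):
    # Parse one trace line into a record dict, or None if it doesn't match.
    m = LINE_RE.match(line)
    if not m:
        return None
    rec = m.groupdict()
    rec['pid'], rec['cpu'], rec['ts'] = int(rec['pid']), int(rec['cpu']), float(rec['ts'])
    # Split the key=value payload into its own dict
    rec['fields'] = dict(kv.split('=', 1) for kv in rec['fields'].split() if '=' in kv)
    return rec

rec = parse_line('  bash-1234  [002] d..3  5678.901234: sched_wakeup: comm=foo pid=42')
assert rec['event'] == 'sched_wakeup' and rec['fields']['pid'] == '42'
```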

dangisafascist 16 hours ago 4 replies      
I'm confused why BPF exists in the first place. Can't we just compile kernel modules that hook into the tracing infrastructure?

It seems like a WebAssembly for the kernel, but local software has the benefit of knowing the platform it is running on. I.e., why compile C code to eBPF when I can just compile to native code directly?

I can potentially see it solving a permissions problem, where you want to give unprivileged users in a multi-tenant setup the ability to run hooks in the kernel. Is that actually a common use case? I don't think it is.

emilfihlman 17 hours ago 1 reply      
Your pictures are broken?
throwme_1980 15 hours ago 4 replies      
I am not sure what to think of these pictures; they trivialize the entire subject and make it look childlike. No engineer worth his salt would be seen with these doodles on his desk. I stopped reading as soon as I saw that.
equalunique 20 hours ago 1 reply      
I should take it upon myself to get familiarized with all of these.
100 days of algorithms github.com
246 points by jpn  16 hours ago   13 comments top 7
vortico 12 hours ago 0 replies      
I'm really happy that GitHub supports rendering ipynb files so nicely. Makes it easy to glance at repos like this without cloning and firing up a Jupyter notebook.
cercatrova 9 hours ago 1 reply      
It seems like the algorithms aren't that complex; after all, they have to be completed in one day. There might be more value in something like 12 algorithms/side projects a year. A month gives enough time to actually develop something meaningful: not necessarily an entire side project, but a deep understanding of a specific algorithm.
torbjorn 13 hours ago 1 reply      
How did you come up with the list of algorithms for your self-challenge?
Sherxon 15 hours ago 1 reply      
Cool (y). I wonder how much time you spent every day.
toisanji 8 hours ago 1 reply      
I want the list of algorithms; this seems like a fun project to implement.
Kenji 10 hours ago 1 reply      
Did he do this beside working 100% as a software engineer?
subhrm 15 hours ago 0 replies      
Thanks for sharing this.
       cached 9 July 2017 10:02:01 GMT