Hacker News with inline top comments - 2 Jan 2017
1
Why do traders in investment banks feel their jobs are immune from AI, etc? quora.com
98 points by aburan28  4 hours ago   88 comments top 23
1
Irishsteve 20 minutes ago 1 reply      
There seems to be some confusion going on about investment bankers and traders in the discussion.

Trading has been changing significantly since the 'big bang' when trading went from pits to electronic. From there on in you see the evolution of algorithmic / program trading. This area has been using quants for decades at this point. There are a good few big-name brands out there that are known for being 'algorithm heavy'; Man, Citadel, DE Shaw come to mind (I'm a few years out of date). That whole field has been open to introducing automation / algorithms to create a business edge and will probably continue to advance because it's good for business. The profile of traders has also changed (barrow boys versus PhDs).

Then I guess on the other side is investment banking such as M&A, equity and debt capital markets. Generally there it's relationship based; juniors work on pitch books which, from what I saw / heard, were generally overlooked. This is potentially a lot harder to automate away. Then the bank would try to pull in some rainmakers or grow them internally to land big deals. Usually these opportunities open up because their clients (other companies) have learnt to trust the organisation, or at the least have learnt to expect a certain behaviour when enlisting their services.

2
vegabook 48 minutes ago 3 replies      
because traders are used to seeing such predictions fail.

Reuters had been trading FX electronically since the early 1990s. At the tier-one IB I worked for, the IT budget was 500m USD a year (across products), and that was in 1997! Huge resources were thrown at automation. However, to this day, large trades in FX (> 10m USD notional) are still almost exclusively performed by humans over a telephone or over the Bloomberg messaging system.

That's because, no matter how much you automate stuff, there is still the 1% "edge case" scenario where something goes wrong, and when that happens, you most definitely want a human that you can "look in the eye", when you have that sort of execution risk. Remember that markets move really fast and there is a lot of risk in big trades that "go wrong" because unwinding said trade will almost certainly cost one of the sides a fortune.

Also, high finance is not just about what you know. It's inevitably about who you know, about "illogical" factors such as salesperson charisma, entertainment, and most importantly, a credible personality type that understands the edge case risks. These things are very hard to replicate with a machine. You'll say they should be, that these things are unfair, but they remain a fact after many attempts at removing them have failed.

As for AI, let's for now call it what it is: machine learning. Learning from the past. That's fine for recognising stop signs at different distances, angles and degrees of noise. But in finance, the past is often misleading. Sure there's trend, but there are also very big instabilities in the historical correlation matrix. Paradigms shift without you even realising it. The constant is change. AI is not good enough at that, yet.

BTW, that's not to say machines are not making inroads. It's becoming almost impossible to get a decent trading job now without knowing at least R and Python to a comfortable degree, and good quant programmers cost a fortune. There's massive demand.

3
bboreham 2 hours ago 2 replies      
For around eight years my primary job function was to put investment bank traders out of a job, by automating what they did.

There were still humans in charge of the algorithms, but they moved more towards Python programmers than market traders.

Many of the "old-style" traders bitched about what we did, and most moved jobs to banks that were less advanced.

(I was in the interest rates line; typical trade size is $10M)

4
sverige 4 minutes ago 0 replies      
Well, an equally good question is why do programmers feel their jobs are immune from AI, etc.?
5
millstone 24 minutes ago 0 replies      
If the role of investment banking is to optimally allocate capital, then part of that job is research. Think Andrew Left's exposing fraudulent Chinese tech stocks, or the Lumber Liquidators controversy. Algorithms can augment this work, but cannot replace it.
6
ericjang 3 hours ago 5 replies      
Due diligence on the financials of a company (what investment bankers are supposed to do) is actually really hard to get right with the algorithms we have today. Much of the data and insight compiled by an I-banker today does not exist in an easily parse-able form for automated algorithms, and a substantial amount of the computation relies on common sense knowledge.
7
konschubert 2 hours ago 5 replies      
> "Would you trust purchasing a house from a seller, without meeting/talking to them, or a single person before and throughout the purchase?"

You mean I can get an unbiased look at a house in peace, compare the numbers, look at the plans, measure the humidity and do my due diligence without a sales person breathing down my neck?

Hell yea. I'd pay a premium for that.

8
earthly10x 3 hours ago 1 reply      
The last few minutes of this documentary on Long Term Capital Management and its algorithmic failure will tell you why: https://vimeo.com/28554862
9
argonaut 3 hours ago 2 replies      
Because investment bankers are just salespeople? Nobody is suggesting AI will replace salespeople in the near future. I could see there being fewer grunt analysts in the future, though.
10
wslh 39 minutes ago 1 reply      
I just think that if AI could beat them, they would already have been replaced. Any innovation in trading is automatically implemented. Maybe this will be possible in the future, but it doesn't depend only on deep learning techniques and having huge samples for learning, because they already have both.
11
bnmfsd 1 hour ago 1 reply      
Is there a fallacy name for this? i.e., asking a question that suggests something ("investment bankers have this feeling") as a premise that may not be true.
12
wobbleblob 3 hours ago 1 reply      
Maybe with their work, labor costs are a relatively small portion of the product price?
13
jahnu 53 minutes ago 0 replies      
For similar reasons that spreadsheets didn't put accountants out of work.
14
bamurphymac1 1 hour ago 0 replies      
Honestly, maybe they just don't care? Rather, what should they do about it? What would we expect people with such huge earning potential right now to do, other than push forward with the plan that works under the status quo?

Answering my own question: I'd expect bankers to save more (of their own money) in anticipation of the good times not lasting as long. More conservative types will weather the storm and spendthrifts will get wiped out.

In other words: business as usual, up until the very moment it isn't.

15
chvid 40 minutes ago 0 replies      
Because what they are doing is really (an indirect form of) sales.
16
yazaddaruvala 3 hours ago 3 replies      
Everybody feels their job is safe from automation. It's the same way planes crash, but never mine.

Believing "I am special", is just built into us.

17
eva1984 50 minutes ago 0 replies      
A lot of them didn't understand technology.
18
allenleein 1 hour ago 0 replies      
Investment bank business is all about connections. Connections lead to deals. Machines can not replace that, ever.
19
blazespin 2 hours ago 0 replies      
If it's true, it could be that there is more insider information being used than is generally recognized. Computers can't get that info.
20
progx 2 hours ago 0 replies      
Insider trading can not be automated ;-)
21
gozur88 1 hour ago 2 replies      
>No amount or greater sophistication of the algorithmic structures listed above, can replace genuine human nuance, interaction and trust.

I don't think this is true. Not at all.

22
edblarney 3 hours ago 0 replies      
I don't think bankers are naive, they are seeing it happen.

If your job can be done overseas, it probably will be outsourced.

If your job is algorithmic, you'll be replaced by a computer.

A lot of M&A activity though doesn't fit that category.

23
aresant 3 hours ago 2 replies      
The HN title is significantly different from the question the submission actually asks, which is "why do TRADERS in investment banks . . . ", and should be updated.
2
Three.js editor threejs.org
102 points by danboarder  4 hours ago   15 comments top 8
1
rl3 31 minutes ago 0 replies      
While I love Three.js, its editor has been around for almost as long as the project itself and has spent a large part of that time neglected. WebGL Studio[0] has a far more impressive editor, but its underlying engine is custom, so in most cases it isn't anywhere near as useful as Three.js due to the latter's ecosystem.

One thing I don't like about either editor is that they're web-based. While it makes a ton of sense on paper, I hate doing any serious 3D work in a browser window. Something like Microsoft's Language Server Protocol[1] but for graphical editors would be amazing. Run the project in a browser window while having bidirectional flow of data between a native desktop editor and the browser window.

Unfortunately if you want to run something like Three.js inside of a native desktop editor, you'd have to embed a web runtime. That really balloons project complexity, so I can see why so many people prefer web-based editors when making web-based projects.

One alternative, at least for 3D applications, is multi-platform frameworks that also work on the web. Oryol[2] in particular comes to mind. Hypothetically you could build a native editor around it, with no need to embed a runtime. The native editor's viewport would just use native graphics APIs for rendering. Then when you like what you see, just compile the same thing for the web. While some edge cases may not make it that easy, overall it seems to be a far superior workflow than having to deal with web-based editors or embedded web within native applications.

Unity 5 and Unreal Engine 4 both have incredible, native desktop editors that support exporting to the web. Unfortunately, they both have massive runtimes that make their web footprints a joke, among other problems.

[0] https://webglstudio.org/

[1] https://github.com/Microsoft/language-server-protocol

[2] https://github.com/floooh/oryol

2
zabast 1 hour ago 0 replies      
Try clicking one of the examples in the menu on the top, and then hit 'Play'. Love it! It is certainly great for playing around with WebGL, but there's still lots of work necessary if it wants to catch up to Unity or CopperCube for creating complex WebGL games or scenes.
3
bhouston 22 minutes ago 0 replies      
If you want scene setup, polygon editing and animation creation, try https://Clara.io - it exports to the threejs and FBX formats as well. Clara.io is similar to Blender and Maya in terms of its features.
4
santaclaus 2 hours ago 1 reply      
I like that it supports the standard Maya keybindings by default.

Would be cool to render the scenes out! Has anyone (successfully) run Cycles through Emscripten? :)

5
djabatt 2 hours ago 2 replies      
Unbelievably cool that this all works so nicely in my browser.
6
hccampos 2 hours ago 2 replies      
While this is fun to play around with, and definitely helpful when using threejs directly, for a full-blown editor on the web there is nothing that beats PlayCanvas at the moment.
7
hossbeast 3 hours ago 1 reply      
Is there an example to play around with?
8
joeberon 45 minutes ago 0 replies      
Trying to edit scripts doesn't work for me
3
What Could Have Entered the Public Domain on January 1, 2017 duke.edu
398 points by shawndumas  12 hours ago   75 comments top 17
1
acabal 11 hours ago 4 replies      
I've mused on this idea elsewhere, but it's applicable here too:

I can't help but feel today's absurdly draconian copyright laws and lengths are going to make our century a terrible cultural black hole for our future-human descendants.

The vast, vast majority of creators who produce culture--books, movies, music, visual art, etc.--don't see widespread distribution, success, or fame in their own lifetimes. This is true today and it was true in the past too.

Today, we have people like Project Gutenberg, the Internet Archive, and others, volunteering their time and energy to lovingly archive, curate, and distribute out-of-copyright works. Many of these works get a cultural second life: people today can freely discover and read a book published before 1923, whose author might have died penniless and unknown in their day. Their work, their name, their ideas, and their legacy lives on 100 years after their death, even if the world they lived in overlooked them, because volunteers can freely update and distribute those works.

Can we say the same about the vast majority of work produced today, whose creators don't attain widespread fame or cultural influence? Everything anyone produces, from a smudge on a piece of paper to the Great American Novel, now automatically gains a copyright that can last over a century, and that has almost never failed to be extended. As one of the many everyday creators who never see significant success, how would you feel if your work was not only ignored today, but for all time too?

Our future-human descendants are going to think culture from 1923-2100 consisted of almost nothing but the 400 comic book movies, the 75 Star Wars movies, and Harry Potter. Everything else will be too unprofitable for a megacorporation to distribute, and illegal for anyone else to.

2
Houshalter 9 hours ago 4 replies      
I wrote this on another thread a few months ago. It's in the context of paywalled science, but it applies more generally:

Copyright law is ridiculous. Nonfiction and scientific work should be treated differently than fictional works. I don't really care if Mickey Mouse goes into the public domain. But it's crazy that 100-year-old scientific works can still be under copyright and illegal to distribute. These objectively have value to society, and the argument for the existence of the public domain is much stronger.

And why on Earth should copyright last so long to begin with? How many works are worth anything after 10 or 15 years? I believe 99% of all works make 99%+ of their revenue in the first few years. Having copyright last a lifetime, let alone much longer, is just crazy. Creators benefit exponentially less for every additional year of protection. And only the very successful ones even benefit to begin with - the vast vast majority of works are just forgotten by that time.

Put a cost on renewing copyright. This is actually how it used to be. Half way through, you could pay a fee to have copyright extended. Very few people paid this fee (because most works aren't economically valuable), so most works went into public domain much sooner. Journals charge $30 to access obscure ancient papers. But I bet they wouldn't pay even $30 to keep the rights to those same papers.

Don't put everything into copyright by default. And again especially works of nonfiction or scientific papers. If the authors want that, then sure. This wouldn't fix the issues with big journals that demand it. But it still seems like a sensible idea to have copyright opt-in, not opt-out.

3
fjarlq 9 hours ago 0 replies      
Website overloaded... here's a cache.

http://webcache.googleusercontent.com/search?q=cache%3Ahttps...

"What books would be entering the public domain if we had the pre-1978 copyright laws?"

 Harper Lee, To Kill a Mockingbird
 John Updike, Rabbit, Run
 Joy Adamson, Born Free: A Lioness of Two Worlds
 William L. Shirer, The Rise and Fall of the Third Reich: A History of Nazi Germany
 Friedrich A. Hayek, The Constitution of Liberty
 Daniel Bell, The End of Ideology: On the Exhaustion of Political Ideas in the Fifties
 Arthur M. Schlesinger, Jr., The Politics of Upheaval: The Age of Roosevelt
 Dr. Seuss, Green Eggs and Ham and One Fish Two Fish Red Fish Blue Fish
 Scott O'Dell, Island of the Blue Dolphins
 John Barth, The Sot-Weed Factor
 Jean-Paul Sartre, Critique de la raison dialectique
"Consider the films and television shows from 1960 that would have become available this year."

 The Time Machine
 Psycho
 Spartacus
 Exodus
 The Apartment
 Inherit the Wind
 The Magnificent Seven
 Ocean's 11
 The Alamo
 The Andy Griffith Show (first episodes)
 The Flintstones (first episodes)
Also listed: songs from 1960 (e.g. Elvis's "It's Now or Never"), and copyrighted scientific research from 1960 still behind paywalls (e.g. 1960 papers on the structure of hemoglobin and myoglobin).

4
Aloha 11 hours ago 1 reply      
The bigger problem in my mind is all the orphaned works that will never be read, seen or enjoyed again because they are still locked behind the wall of copyright for owners who no longer care about them.
5
Goopplesoft 11 hours ago 1 reply      
Interestingly the Copyright Term Extension Act is also known as the "Mickey Mouse Protection act"

artlawjournal.com/mickey-mouse-keeps-changing-copyright-law/

6
starseeker 8 hours ago 1 reply      
The current copyright system is probably inevitable if you view monetization as the sole valid means of creation incentivization. There are also those who want to preserve the "integrity" of their work indefinitely and avoid any risk of what they would consider degrading or inappropriate reuse. If you come at this from either the monetization or the indefinite integrity camps, the public domain is entirely a negative proposition. This is unfortunate, since from a societal standpoint there are many overall benefits to the public domain (renewal of interest in what would otherwise be lost works and lost effort, lowered barriers to entry for those needing older work on which to base new efforts, etc.) Unfortunately, most of those gains are also net negatives to those working within (and benefiting from) the current system, and consequently the public domain is unlikely to have advocates with the resources to sway the powers that be in its favor. I suppose a sufficiently broad and strong wave of public opinion might do the trick, but I don't know what the prospects of that are (I suspect dim in the short term, more difficult to calculate in the long term if copyright terms keep getting extended indefinitely.)
7
bootload 9 hours ago 0 replies      
Watching "The Silicon Valley of Hardware" with Bunnie Huang [0], it is obvious Shenzhen would have been a different place if the militarisation of intellectual property was enforced. So the question is the step from hardware hacking and manufacturing and sales hampered in the US?

[0] https://www.youtube.com/watch?v=SGJ5cZnoodY

8
protomyth 10 hours ago 1 reply      
I wonder if there is a compromise that can be done to at least get the works out there. I get the feeling the big fear isn't really that "Steamboat Willie" would be freely available, but that the character of Mickey Mouse would be.

It seems like an extension of trademark law with continued registration and payments on the characters would be a compromise that could break loose the original book / movie.

I would imagine many would hate to give up the ability to create new, original stories or do mashups, but we won't get that anyway.

9
EvanAnderson 7 hours ago 0 replies      
It's pleasing to know that long copyright terms provide incentive for dead authors to produce new works.
10
11
Hondor 7 hours ago 0 replies      
The TPP extends copyright terms in other countries. The new president opposes that deal. So there's a bit of hope.
12
chiaro 3 hours ago 0 replies      
It's a shockingly transparent corruption of power that these increases were applied retrospectively. The entire rationale for copyright is to incentivize the creation of new works, which explicitly does not apply to past creations. This is nothing more than rent seeking.
13
visarga 5 hours ago 0 replies      
I think the effective status of copyright is being influenced by open source and creative commons. The more we publish in open source, the more value it accumulates, and that attracts new publishing. It grows like a black hole, eating everything (or like communism, if you asked Microsoft a few years ago). Just look at what remained of the copyright-protected software market after 2000. Github is the new Library of Alexandria, not What.CD, because What.CD was only free as in beer, while Github is free as in freedom as well.
14
necessity 5 hours ago 0 replies      
IP is bs. Don't care for it, don't respect it. Don't expect government - the sole protector of it - to ever change anything regarding it for the better, just ignore it.
15
cooper12 10 hours ago 0 replies      
This comment was in reply to a now-deleted comment which ended with the statement: "The bigger problem is figuring out how to curate all of this which is more widely published than ever before. Also, it's not crazy to think that today's copyright laws see some major revisions in the next few decades. The Internet has changed everything since just 1980 or so, and these are old laws that need reworking."

---

Unfortunately I'm not sure I can feel the same optimism. Copyright law has long been beholden to corporate interests. For them, it's only fine if it keeps getting expanded indefinitely. The more restrictive the terms, the more likely that others will have to license their works and the more they can bludgeon any offenders.

Your view is really tied to the present day, but what about all these works that aren't available on computers that need to be desperately digitized? I think we already have a natural curation mechanism: something popular and important gets reproduced in some way; it gets quoted, adapted, tweaked. Those are the works that need to be prioritized, but in doing so we might miss out on undiscovered gems or things whose importance will only be apparent in hindsight. Also, storage is cheap nowadays, we can easily just keep everything, index it, and let others sift through it later. (one example is the Internet Archive, we won't know what role these stored websites could play in a few decades like the Geocities archive)

16
geekamongus 11 hours ago 1 reply      
I love that this website is a throwback to the 90's, complete with table-based layouts and overlapping cells.
17
ginko 11 hours ago 4 replies      
Sure, it sucks that none of these properly entered the public domain by now, but who really cares if you pirate Psycho, The Magnificent Seven, or Elvis's "It's Now or Never" nowadays?
4
Tesla Rolling Out Autopilot Software Updates to 1,000 Cars bloomberg.com
87 points by JumpCrisscross  8 hours ago   65 comments top 4
1
marricks 7 hours ago 1 reply      
Headline seemed a bit vague to me, so: what this update does is bring cars released with gen 2 hardware in line with the autopilot features of gen 1 hardware. So lane assist, crash braking, etc. for cars released late last year.

The whole point of gen 2 hardware is that it should allow fully autonomous cars, even in rain (which others likely won't be able to do), once the software is fully figured out. Next year it will be exciting to see how quickly they can push updates towards that goal, as well as how quickly regulation can catch up.

2
JumpCrisscross 7 hours ago 4 replies      
Is a Tesla bought today designed to be software upgradeable to full autonomy?
3
dbg31415 4 hours ago 4 replies      
I saw someone driving a Model X in Austin this weekend, car still had dealer tags. He was shooting selfies and clearly documenting himself riding in a car that drives itself. I'm happy he's happy... but I have to say it was annoying to drive behind him.

The software clearly didn't follow the flow of traffic; it was rigidly locked in going the speed limit. That may be OK, but it was a very empty weekend and everyone was going 10-15 miles over the speed limit. So he had quite a chain of cars backed up behind him, and like a half-mile gap in front of him before the next car.

It was clear the Tesla software wasn't smart enough to close the gap by accelerating above the posted limits, or move him over to the right-hand lane where he wouldn't block traffic -- we all ended up having to pass him on the right.

I couldn't help but think, "Great... just what we need more of... simulated old people driving slow in the left-hand lanes and cars that encourage jackasses on their phones to take pictures instead of paying attention."

But... to be clear, I still want one. =P

4
NotThe1Pct 6 hours ago 1 reply      
BSOD is now literally a scary real possibility

It gives a new meaning to the word "crash"

5
Ask HN: Excluding WordPress, what is everyone's favorite for blogs/small stores?
40 points by travisby  1 hour ago   32 comments top 21
1
schappim 57 minutes ago 0 replies      
A number of our portfolio sites (including http://piaustralia.com.au ) are static html hosted on AWS S3 + Cloudfront.

The sites are created using Middleman[1], a ruby static site generator which I've found to be a little bit more flexible than Jekyll.

On our static sites, we grab inventory information as JSONP from a small Sinatra based service on Elastic Beanstalk with read only access to the DB. Other than this and the checkout (we'll get to that in a bit), everything is client side Javascript utilising local storage for the cart state.

We do not host our own checkout. Instead we use Shopify's ancient and way under-publicised "Cart Links"[2] feature. Cart Links let you pre-populate a cart and send the user to the checkout if you so wish.

To upload the static files to S3 we use an awesome program called S3_website which knows how to look for the rendered html from a number of static site generators, and sync it to S3. It's also smart enough to setup redirects, invalidate CDN caches and even gzipping content. It's freaking amazing[3].

[1] Middleman - https://middlemanapp.com

[2] Shopify Cart Links - https://help.shopify.com/themes/customization/cart/use-perma...

[3] S3_website - https://github.com/laurilehmijoki/s3_website
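
For illustration, here is a minimal sketch of the JSONP-plus-localStorage pattern described above. This is not the actual site code; the endpoint URL, callback name and storage key are invented for the example.

    // Fetch inventory as JSONP: the server wraps its JSON reply in a call
    // to the named callback, so a plain <script> tag works cross-origin.
    function loadInventory(onReady) {
      window.handleInventory = function (items) { onReady(items); };
      var script = document.createElement('script');
      script.src = 'https://inventory.example.com/stock.js?callback=handleInventory';
      document.head.appendChild(script);
    }

    // Cart state lives entirely in the browser until checkout.
    function addToCart(sku, qty) {
      var cart = JSON.parse(localStorage.getItem('cart') || '{}');
      cart[sku] = (cart[sku] || 0) + qty;
      localStorage.setItem('cart', JSON.stringify(cart));
    }

At checkout, a cart built this way can be turned into a Shopify cart link as described in [2].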

2
technion 1 hour ago 3 replies      
Things I hear CMS developers talking about: Constantly applying security updates. The lack of maintenance on their favourite plugins. Which caching plugins work the best. How important a CDN is. The need for security plugins, which themselves are often exploited. The HN/Reddit hug of death. Whether a host has suitable versions of PHP. "Webscale" being ten users a minute.

I don't get it. I've been using a static site built in Jekyll, which just works(tm). I recently rebuilt my blog with AMP compliance, and it still looks the way I want it to.

If you're not a Ruby person, there's Hugo as a Go alternative. For blogs, we really should be seeing the end of maintenance and vulnerabilities, and static pages are cheaper to host.

Edit: of course, I answered the blog question, but not the ecommerce one.

3
darylteo 6 minutes ago 0 replies      
As in the previous thread, I'll just cautiously mention OctoberCMS. I don't know if it will do the job; I'm evaluating it for my client projects with briefs starting with "we'd like a CMS... "

It is just as open source. However, it's newer, having just hit stable recently. It has a strong plugin system and is purpose-built as a CMS from the ground up, built on Laravel with all the nice things.

4
antileet 7 minutes ago 0 replies      
I use https://getgrav.org/ and this is why:

- No DB. Use a flat file layout similar to Jekyll or static site generators.

- Offers dynamic features like redirecting and custom routing when you need it. This isn't possible with a pure static site generator.

- Decent optional panel to write, edit and manage almost all aspects of your site.

- Quite fast once you set it up with good caching.

5
bharani_m 1 hour ago 1 reply      
Ghost + Gumroad (selling products) would be a decent alternative.

Other options are to go with static site generators like Middleman or Hugo for your blog and setting up a shop on Shopify or Sellfy.

As a side note, I have open sourced the code [1] for my shop/blog that is running at https://www.authenticpixels.com. It is written in Elixir/Phoenix.

[1] https://github.com/authentic-pixels/ex-shop

6
latteperday 31 minutes ago 0 replies      
If it's for you, probably hand code. If it's for a client, try https://pulsecms.com It allows you to create static sites with a CMS layer on top that clients can edit easily. Static sites all the way baby!
7
fdik 34 minutes ago 0 replies      
I have my own blog software, YBlog: http://blog.fdik.org/yblog2.tar.bz2 Probably it's not feasible for most people, because it relies heavily on YML2 http://fdik.org/yml and vim http://vim.org

For CMS I really can recommend http://mezzanine.jupo.org/

It also includes a blog, is in Python/Django and fulfils all needs from very small to very large sites.

8
JoshTriplett 37 minutes ago 0 replies      
For blogs and personal sites, I'd suggest a static site generator such as ikiwiki (for self-hosting), Branchable (hosted), or Github Pages (hosted, but no SSL support for custom domains).

In addition to SSL support, ikiwiki offers several other useful features for constructing a blog or news site, such as "take all the pages under blog/* and emit a page with the last 10 in reverse order, including an RSS feed".
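
For reference, the "last 10 posts" behaviour described above corresponds to ikiwiki's inline directive; a minimal sketch placed in the blog's index page might look something like the following (parameter names taken from the ikiwiki docs; exact options may vary by version):

    [[!inline pages="blog/*" show="10" feeds="yes"]]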

9
danieldk 1 hour ago 0 replies      
Jekyll/Hakyll for blogs. They are both very customizable, you can write Markdown, and since they produce static pages you have one security worry less.
10
sharmi 46 minutes ago 0 replies      
I use the Python-based Nikola, a static blog generator. This is for my personal blog. The killer features for me are using Jupyter to write blog posts and custom URLs for posts (which seem to be missing from Pelican, the only other blog generator to support Jupyter). The documentation is good for the usual tasks. In the extremely rare case when the documentation is not sufficient, the code is really easy to understand.
11
altharaz 1 hour ago 0 replies      
It depends on your skills/needs.

Do you want to customize everything with ease? Do you have someone who can maintain the security of your blog and eventually add new features later?
=> WordPress on your own hosting

Do you want to use a very classical theme? Do you want a complete service with no skills required and no maintenance?
=> Wix or WordPress.com

Do you want to customize everything? Do you have someone to maintain the blog on a regular basis?
=> Jekyll on GitHub or on your own hosting

Should you need to sell things online with the blog, WordPress can be augmented with Shopify, while Wix already has this feature.

12
mrswag 1 hour ago 3 replies      
I generate static webpages from markdown in a < 100 lines bash script. It's just a for loop using sed, pygments and markdown, hosted on github.

It has a local webserver, spell check, optional image compression, and minimal dependencies.

I don't get the need for Jekyll or Hugo. They're bloated and it's a pain to customize so-called "themes". I'm OK with 'boring' HTML and CSS.
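
As a rough sketch of what such a script can look like (not the commenter's actual script; it assumes a markdown CLI such as the one from the discount package, and the header/footer fragments are hypothetical):

    #!/bin/sh
    # Convert each markdown post into a standalone HTML page.
    mkdir -p out
    for src in posts/*.md; do
      page="out/$(basename "${src%.md}").html"
      cat header.html  >  "$page"    # shared page header
      markdown "$src"  >> "$page"    # markdown -> HTML body
      cat footer.html  >> "$page"    # shared page footer
    done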

13
Giorgi 31 minutes ago 0 replies      
There is none; that's why everyone is trying, but failing, to use static page generators, which are nowhere near WP's functionality.
14
xiaoma 1 hour ago 0 replies      
Squarespace is the clear option. It's even faster to get set up with than WP and in the past 2 years has become a truly viable competitor.
15
iamslash 1 hour ago 0 replies      
I think Jekyll is the best way to blog. And I recommend hosting it on GitHub for free.
16
sgdesign 1 hour ago 0 replies      
If this is for a client that can't manage a static site themselves, then I would probably look into splitting out the content editing part. For example services like https://www.datocms.com/ let you publish to a separate static site generator.
17
richie5um 41 minutes ago 0 replies      
I use Hugo for static blog generation and GitHub pages for hosting. Works well, and, apart from the domain registration, is free.
18
zakki 32 minutes ago 0 replies      
Have you tried PrestaShop?
19
bbcbasic 1 hour ago 0 replies      
Github pages. Free hosting. Just fork a Jekyll template and off you go.
20
dbg31415 1 hour ago 0 replies      
If you already think we're going to say WordPress, what do you want to do that WordPress doesn't do?

I don't like WordPress for eCommerce, but I think it's great for blogs and content sites.

For eCommerce... just too many variables. Who you want to use for fulfillment, what other systems you want to integrate with, if you need a staging instance or customer loyalty software or any of the 50 other things you can integrate.

For content... Ghost is OK. Just... WordPress has thought of everything already. Plugins, solid UX, extra features you didn't know to ask for... it's hard for other platforms to catch up.

21
milankragujevic 1 hour ago 0 replies      
Custom CMS in PHP. That's what I always use for everything.
6
Recreating 3D renderings in real life skrekkogle.com
157 points by mef  11 hours ago   16 comments top 7
1
Mizza 1 hour ago 0 replies      
Some of their other projects are pretty cool too: http://skrekkogle.com/solitaire.html
2
cdevs 6 hours ago 1 reply      
Funny - I woke up this morning and found this Majora's Mask Zelda tribute https://youtu.be/vbMQfaG6lo8 and then the behind-the-scenes, and was surprised it was CGI on top of on-location forest video that ends up looking rendered. Sometimes reality looks so good it looks fake, so this was an interesting spin.
3
santaclaus 8 hours ago 0 replies      
No mention of the Cornell box, the OG real life render!
4
Trombone12 8 hours ago 1 reply      
I find it very upsetting that they even faked the block of wood!
5
mholt 9 hours ago 2 replies      
It's amazing how much effort it takes to make rendered scenes look realistic and real scenes look rendered.
6
colordrops 7 hours ago 0 replies      
Seems almost like cheating to use a 3D printer to make some of the parts.
7
bwang29 9 hours ago 2 replies      
I've observed hardware design is getting flatter in the last couple of years (thinkpad, MacBook, even IKEA furniture). Guess it has something to do with the change in software UI language.
7
Echo: Print the first positional argument (written in ASM) github.com
27 points by activatedgeek  5 hours ago   16 comments top 7
1
aiur3la 2 hours ago 2 replies      
Serious question: why is this on HN front page? Am I missing something?
2
jamesfisher 1 hour ago 1 reply      
The instruction `repne scasb`[1] stood out. `repne X` means "while (not equal) { X; }". How is `repne` implemented? Is `repne scasb` assembly shorthand for a `scasb` then a `jne`? Or is `repne` some fancy higher-order instruction which takes another instruction as its argument?

[1]: https://github.com/kelseyhightower/echo/blob/53d84ea4e79db3d...
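
For what it's worth, REPNE is a one-byte instruction prefix that the CPU executes directly, repeating the string instruction while RCX is non-zero and the zero flag is clear; it is not assembler shorthand for a separate compare-and-jump loop. A classic use is computing a string length (NASM-style sketch; the echo source linked above may differ in detail):

    ; rdi -> NUL-terminated string
    xor   al, al        ; byte to scan for (0)
    mov   rcx, -1       ; effectively no limit on the count
    repne scasb         ; advance rdi, decrement rcx, stop once a byte equals al
    not   rcx
    dec   rcx           ; rcx = string length, excluding the NUL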

3
whym 2 hours ago 0 replies      
The source code includes this notice: "Copyright 2017 Google Inc. All Rights Reserved."

I wonder if a couple dozen lines of assembly code could be trivial enough to be public domain. Assuming a straightforward implementation, surely there is far less freedom in expressing the simplest version of the echo program in ASM compared to, say, C?

4
bdcravens 4 hours ago 2 replies      
I'm curious as to the why. Kubernetes doesn't keep Kelsey busy enough? :-)
5
wfunction 3 hours ago 0 replies      
Doesn't echo print all the arguments?
6
tlholaday 2 hours ago 1 reply      
echo returns nonzero if it cannot write ...

touch foo.txt; chmod 400 foo.txt; echo ouch > foo.txt; echo $?

..., but it appears this asm always returns zero.

7
andreiw 3 hours ago 0 replies      
Now rewrite in IR =))
8
How I Write Tests nelhage.com
245 points by henrik_w  15 hours ago   89 comments top 18
1
xivusr 12 hours ago 4 replies      
Excellent article and it's perfect timing for me!

In the past while contracting I was usually asked to include in my proposals estimates for tests.

The tests failed to be useful, simply because they were written after the feature was actually implemented! We knew better, of course, but this was done in order to quickly get builds a client could look at.

Then when clients wanted to add/change features, guess what got cut to make up for the time? That's right, the tests!

So the tests were always secondary, and the projects tended to suffer as a result.

Recurring "fixed bugs" cost more than just working hours to fix.In the eyes of a client or customer, they are so much worse than a shiny new bug.Tests can help catch recurring bugs before a client/customer does - and save you not only time,but from losing your customers confidence.

Now, I'm building my own app and I'm using a disciplined TDD approach. I didn't start my project this way, as it seemed overkill when it was just me. But I saw early on that not practicing TDD, even solo, was madness. It is taking longer, but my actual progress is consistent, and I'm already far more confident about the stability of the app.

2
mberning 13 hours ago 3 replies      
Interesting post. I have found myself doing a lot of the same things through my own experience. One thing I have been striving for lately is to build functionality using relatively more small methods that accept parameters as opposed to using state which is stored in instance variables. I find that this makes my life easier when writing tests and also helps identify corner and edge cases that I may not have thought about. And when something does break it is usually very easy to add a failing test, fix the code, and see that everything is now working. Also makes me a lot more confident when I go to refactor. Sandi Metz gave a great talk on the Gilded Rose problem that explores these concepts.
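
A tiny illustration of that contrast (in Python for brevity; the class and function names are made up): the parameterized version needs no object state to be set up before it can be tested.

    # State-based: the test must first construct an object in the right state.
    class Invoice:
        def __init__(self, lines):
            self.lines = lines

        def total(self):
            return sum(line["amount"] for line in self.lines)

    # Parameter-based: the same logic as a small pure function of its inputs.
    def invoice_total(lines):
        return sum(line["amount"] for line in lines)

    def test_invoice_total_empty():
        assert invoice_total([]) == 0

    def test_invoice_total_sums_amounts():
        assert invoice_total([{"amount": 3}, {"amount": 4}]) == 7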
3
dvirsky 3 hours ago 3 replies      
I don't like to write tests ahead of code, but the idea of "Avoid running main" is very powerful and something that helps me a lot. Usually in a new project I try to just use tests as the playground for the evolving code, and delay actually creating a working application for as long as I can (not so hard in non UI apps). In an existing project you delay integrating your new module with the whole app.

Sometimes my tests just start as a bunch of prints to see the results visually. Then when I'm happy with the results I convert these prints to assertions and the playground becomes a real test suite.
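
A minimal sketch of that progression (Python; parse_price is a hypothetical function under development):

    def parse_price(text):
        return float(text.replace("$", "").replace(",", ""))

    # Playground phase: print and eyeball the output while the code evolves.
    def explore():
        print(parse_price("$1,234.50"))   # expect 1234.5

    # Once the output looks right, the same call is promoted to an assertion.
    def test_parse_price():
        assert parse_price("$1,234.50") == 1234.5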

4
gregorburger 2 hours ago 3 replies      
Has anybody experience with testing code that produces graphics (e.g. 3D engines, etc.)? I saw some articles stating that mocking the API is a good approach. But how can you test shaders etc. Tests based on image comparisons seem very cumbersome. We currently rely heavily on our QA which does automated integration tests based on image comparisons. But there is no immediate feedback for developers with this approach.
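
One common shape for such a test is a golden-image comparison; a rough sketch (Pillow assumed, file names and tolerance are placeholders, and the rendered output must be deterministic for this to be stable):

    from PIL import Image, ImageChops

    def images_match(rendered_path, golden_path, tolerance=2):
        rendered = Image.open(rendered_path).convert("RGB")
        golden = Image.open(golden_path).convert("RGB")
        if rendered.size != golden.size:
            return False
        diff = ImageChops.difference(rendered, golden)
        # largest per-channel difference anywhere in the image
        return max(hi for _, hi in diff.getextrema()) <= tolerance

    def test_teapot_render_matches_golden():
        assert images_match("out/teapot.png", "golden/teapot.png")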
5
mooreds 1 hour ago 0 replies      
Nice to see a non religious post about testing.

Particularly enjoyed the emphasis on regressions. I converted to testing when working on a relatively complex data transformation. This was replacing an existing, scary data transformation process that was hard to test (we'd run new code for a few days and do a lot of manual examination), so I made extra certain to design the new system so it was testable. Catching regressions in test, especially for data processing, is just so much better than catching and repairing them in production.

6
aisofteng 5 hours ago 0 replies      
This post is timely for me because it happens to encapsulate the way I've found a balance recently between test driven development and "test after writing" development that seems to be very effective for me.

As the article notes, writing all tests first is unreasonable because you won't know all implementation details until you have to make them; the tests I write first are thus functional tests, nowadays with cucumber.

Writing tests after coding is lacking, philosophically, because you often spend your time defining the abstractions and then just rewriting a verification of that abstraction in tests, plus some null checks.

The balance I've been using has been to write tests for abstractions I come up with, one by one. If an abstraction is decoupled and encapsulated, the unit tests come naturally. If I have to write a lot of mocks for an abstraction, that often tells me it isn't cleanly decoupled or simplified.

Furthermore, as you write tests as you go this way, you often find yourself writing the same support code more than once, at which point you notice it and find abstractions in that support code; this ends up explicitly giving you a view of what conscious and subconscious assumptions you have about what inputs you are expecting and what assumptions you have made. This is often enlightening.

7
napo 13 hours ago 1 reply      
> I fully subscribe to the definition of legacy code as code without an automated test suite.

> Ive never really subscribed to any of the test-driven-development manifestos or practices that Ive encountered.

I feel the exact opposite. I've worked in projects with a lot of legacy code, both with BDD and with UTs that we added later on.

Even with the best intentions, the latter always failed: we always ended up with a lot of unreadable tests that had no meaning and that we were afraid to look at. However, when I was working in a team fully committed to BDD, we looked at the tests before looking at the code, the tests were at the center of the development process, and we were able to write fast, solid, and simple tests.

Nowadays, I'm more interested in articles that understand that tests can be a pain too. And tbh I don't really trust articles that aim at high coverage without talking about the different challenges that come with tests.

8
KennyCason 1 hour ago 1 reply      
Writing a module and its tests together, and doing them both at the same time, is my #1 piece of advice. If you're having to run main while developing, I consider something to be a bit odd.

I also find this a much more favorable approach than pure TDD. In my opinion, this method is easier to "sell" to other developers.

9
z3t4 2 hours ago 1 reply      
I find that bugs occur when you do not fully understand all possible state combinations and edge cases. So if that is the case, I try to break it down into smaller units that are easier to comprehend. There will still be bugs, but they are usually edge cases you didn't imagine would happen, and that's where I find testing useful, as the next person who touches the code will probably also miss that edge case.

1) Make changes
2) Manually test & run automatic tests
3) Write automatic tests for each problem/bug discovered
4) Repeat

This only works for decoupled code though. If all units are coupled you must have automatic tests of everything as no-one can comprehend exponential complexity.

10
kisstheblade 4 hours ago 1 reply      
Does Linux have a comprehensive test suite (comparable, e.g., to SQLite's)? I'm wondering because it seems to be quite bug free, and is a large project, and a kernel seems quite suitable for unit testing (compared to your typical CRUD app, for example).

I suspect there's not much formal testing (at least done or required by Linus, some external projects may be available). So it seems that testing isn't that necessary for a quality project? On the other hand Linux has a large community so maybe that substitutes for a comprehensive test suite?

11
biggerfisch 5 hours ago 2 replies      
His comment about the "zoo" of data-driven tests made the way my university's major algorithms class did tests finally make sense. It's not a concept that particularly easy to search for when you're working from the command of "make tests with this filename format, 'test-<args>'", nor is it even something that strikes one as something that might be an actual design pattern (at least for me).

I do wish the reasoning had been explained to me far earlier as I might have been able to really recognize the testing as useful and not just another strange requirement.
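
For anyone unfamiliar with the pattern, a data-driven ("zoo") test typically looks like one test body fed by a table of named cases; a small sketch (pytest assumed, slugify is a hypothetical function under test):

    import pytest

    def slugify(title):
        return "-".join(title.lower().split())

    @pytest.mark.parametrize("title,expected", [
        ("Hello World", "hello-world"),
        ("  leading spaces", "leading-spaces"),
        ("Already-Slugged", "already-slugged"),
    ])
    def test_slugify(title, expected):
        assert slugify(title) == expected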

12
tehwalrus 2 hours ago 1 reply      
I just took a piece of code with quite good test coverage, and stopped running main a couple of times during the "unit" test run. Coverage plummeted, and I realised how much of the code is still untested.

(The code was actually already structured for testing, I just hadn't written them because of that coverage number....)

I am still running main, by the way, but that's a different invocation called "system tests" which runs if unit tests pass (and after the coverage report).

13
petters 1 hour ago 0 replies      
I am certainly no religious follower of TDD, but I do think writing tests before code is useful.

The reason is simple: it tests your tests. I have many times found bugs in tests that made them always pass.

14
vinceguidry 13 hours ago 5 replies      
These days, I treat test code the same way as I treat application code, refactoring and cleaning up as I go. I've noticed that in most projects, unless you do this, there's a tendency to copy-paste tests, without any thought given to DRY.
15
amelius 12 hours ago 4 replies      
One important concept in testing is "code coverage". The technique is to (conceptually) place a unique print statement in every branch of every "IF" statement or loop (every basic block), and then try to write tests until you've triggered all of the print statements.

EDIT: This explains the concept, and gives a minimal approach to testing (i.e., you should test more than this, but at least this). Of course, there are tools to automate this, but not for every (new) language.

16
crdoconnor 12 hours ago 2 replies      
>My final, and perhaps more important, advice is to always write regression tests. Encode every single bug you find as a test, to ensure that you'll notice if you ever encounter it again.

This is good advice.

On a previous (technical-debt ridden) project I did a little measuring and there was a pretty clear hierarchy of test value - in terms of detected regressions:

1) Tests written to invoke bugs.

2) Tests written before implementing the feature which makes them pass.

3) Tests written to cover "surprise" features (i.e. features written by a previous team that I never noticed existed until they broke or I spotted evidence of them in the code).

4) Tests written after implementing the feature.

5) Tests written just for the sake of increasing coverage.

Surprisingly 5 actually ended up being counter-productive most of the time - those tests detected very few bugs but still had a maintenance and runtime overhead.

17
aisofteng 2 hours ago 0 replies      
Having responded to several comments here, I am concerned about the fact that most of the discourse here seems to fail to completely understand what the goals are of unit testing - and, worse, many comments, despite this omission, seem to be made with an air of confidence which I could see myself, when I was a junior developer, accepting as reliable, because of that tone. As of this writing, I feel that anyone new to unit testing that comes across this overall discussion will be sent down the wrong path and may not realize it for a very long time, and so I feel that it is important to outline what I feel are the most serious misconceptions about unit testing I see here.

* Code coverage's value: code coverage is not a goal in and of itself. Seeing 100% code coverage should not make you feel comfortable, as a statistic, that there is adequate testing. If you have 100% coverage of branching, you might have indeed verified that the written code functions as intended in response to at least some possible inputs, but you have not verified that all necessary tests have been written - indeed, you cannot know this from this simple metric. To give a concrete example: if I write one test that tests only a good input to a single function in which I have forgotten a necessary null check, I will have 100% code coverage of that function, but I will not have 100% behavioral coverage - which brings me to the following point.
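
A concrete (hypothetical) version of that example: a single happy-path test executes every line of the function, so line coverage reports 100%, yet the missing guard is never exercised.

    def mean(values):
        return sum(values) / len(values)   # no guard for None or []

    def test_mean_happy_path():
        assert mean([2, 4]) == 3           # 100% line coverage of mean()

    # Behavioral coverage would also demand cases such as:
    #   mean(None)  -> ?
    #   mean([])    -> ?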

* What to think about when unit testing a function, or how to conceptualize the purpose of a unit test: unit tests should test behavior of code, so simply writing a unit test that calls a function with good input and verifies that no error occurs is not in the correct spirit of testing. Several unit tests should call the same function, each with various cases of good and bad input - null pointer, empty list, list of bogus values, list of good values, and so on. Some sets of similar inputs reasonably can be grouped into one bigger unit test, given that their assert statements are each on their own line so as to be easily identifiable from error output, but there should nevertheless be a set of unit tests that cover all possible inputs and desired behaviors.

* Unit test scope: A commenter I responded to in another thread had given criticism along the lines that by making two unit tests which test cases A and B entirely independent, you fail to test the case "A and B". This is a misunderstanding of what the scope of a unit test should be in order to be a good unit test - which, incidentally, goes along with misunderstanding the intent of a unit test. A unit test, conceptually, should check that the behavior of one piece of functional code under one specific condition is as intended or expected. The scope of a unit test should be the smallest scope a test can have without being trivial; we write unit tests this way so that a code change later that introduces a bug will hopefully not only be caught, but be caught with the most specificity possible - test failures should tell the engineer a story along the lines of "_this_ code path behaved incorrectly when called with _this_ input, and the error occurs on _this_ line". More complex behavior, of the sort of "if A and B", is an integration test; integration tests are the tool that has been developed to verify more complex behavior. If you find yourself writing a unit test that is testing the interaction of multiple variables, you should pause to consider whether you should not move the code you are writing into an integration test, and write two new, smaller unit tests, each of which verifies the behavior of each input independently of the other.

* Applying DRY to test setup: if you abstract away test setups, you are working against the express intention of each unit test being able to catch one specific failure case, independently of other tests. Furthermore, you are introducing the possibility of systematic errors in your application in the _very possible_ case of inserting an error in the abstractions you have identified in your test setup! Moreover, if you find yourself setting up the same test data in many places, that should not suggest to you to abstract away the test setup - rather, it should hint at what is likely a poor separation of concerns and/or insufficient decoupling in your software's design. If you are duplicating test code, check whether you have failed to apply the DRY principle in your application's code - don't try to apply it to the test code.

And, in my opinion, the most important and common misconception I see here, and I really feel that it should be more widely understood - and, in fact, that many problems with legacy code will likely largely stop occurring if this mindset becomes widespread:

* Why do we write unit tests?

We write unit tests to verify the behavior of written code with respect to various inputs, yes. But that is only the mechanics of writing unit tests, and I fear that that is what most people think is the sole function of unit tests; behind the mechanics of a method there should be a philosophy, and there is.

Unit tests actually serve a potentially (subjectively, I would say "perhaps almost always") far more vital purpose, in the long term: when an engineer writes unit tests to verify behavior of the code he has written, he is, in fact, writing down an explicit demonstration what he intended the program to _do_; that is, he is, in a way, leaving a record of the design goals and considerations of the software.

(Slight aside: in my opinion, being a good software engineer does _not_ mean you write a clever solution to a problem and move on forever; rather, it means that you decompose the problem into its simplest useful components and then use those components to implement a solution to the problem at hand whose structure is clear by design and is easy for others to read and understand. It further means (or should mean) that you then implement not only verification of the functionality you had in mind and its robustness to invalid inputs which you cannot guarantee will never arrive, but also implement in such a way that it indicates what your design considerations were and serves as a guard against a change that unknowingly contradicts these considerations as a result of a change made by someone else (or yourself!) at a later time.)

Later, when the code must be revisited, altered, or fixed, such unit tests, if well-written, immediately communicate what the intended behavior of the code is, in a way that cannot be as clearly (or even necessarily, almost definitely not immediately) inferred from reading the source code.

In summary, these are the main points that stuck out to me in the conversations here; I do want to emphasize that the last point above is, in my opinion, the most glaring omission here, because it is an overall mindset rather than a particular consideration.

18
nickpsecurity 11 hours ago 0 replies      
People into testing everything should also remember there's test generation tools in commercial and FOSS space to reduce work necessary to do this. Here's two examples for LLVM and Java respectively. Including the KLEE PDF since the results in the abstract are pretty amazing.

https://klee.github.io/

https://www.doc.ic.ac.uk/~cristic/papers/klee-osdi-08.pdf

http://babelfish.arc.nasa.gov/trac/jpf/wiki/projects/jpf-sym...

9
IR is better than assembly (2013) popcount.org
122 points by oherrala  13 hours ago   46 comments top 14
1
sebastianconcpt 7 minutes ago 0 replies      
I don't have projects that fit this, but it sounds like a no-brainer. Abstracting away the specifics and keeping the timeless parts is a beautiful move!
2
jimmyswimmy 4 hours ago 1 reply      
This is neat, I had no idea there was an intermediate language like this which is cross platform. It would seem that I could decompile binaries using llvm tooling and then recompile for other platforms.

http://stackoverflow.com/questions/6981810/translation-of-ma...

Obviously not cross os, but might be good for bare metal stuff. I've gotten libraries in the past compiled with weird ABIs. This sounds really neat.

3
amelius 10 hours ago 2 replies      
> If you really can write better assembly than LLVM, please: Don't write any more assembler by hand - write IR and create new LLVM optimisers instead. It'll benefit everybody, not only you. Think about it - you won't need to write the same optimisations all over again on your next project!

Very noble goal, but I can imagine that it can take a lot more time to do that than just writing a bunch of assembler instructions.

Perhaps there could be some intermediate approach, where LLVM can learn from a IR/assembly pair and improve itself (?)

4
ori_b 10 hours ago 0 replies      
If I wanted to do this sort of thing, I'd probably use either intrinsics or C directly -- the compiler is already good at dealing with both, and will probably do a better job than LLVM IR.

The biggest reasons to drop to assembly is because there are high level constructs that the compiler is very unlikely to recognize and optimize effectively. Things like AES-NI, hardware RNGs, and similar.

5
greglindahl 12 hours ago 1 reply      
The Open64 / PathScale compiler suite has had intrinsics written in the IR (WHIRL) for a long time. WHIRL is stable enough that it's not a maintenance problem. Being written in IR means that the full power of inlining, function specialization, etc etc will be used, even if whole-program optimization isn't being used.
6
lacampbell 13 hours ago 3 replies      
Most of the guides I've seen for LLVM recommend you use the LLVM libs to generate the IR. Why? I feel like it would be much easier to generate the IR directly like the author has done.

It also wouldn't tie me to any particular library - I think the only actively maintained one is the C++ one.
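
For readers who haven't seen it, hand-written textual IR is quite approachable; a minimal sketch of a module containing one function (real modules usually also carry target datalayout/triple lines):

    ; add.ll - returns the sum of two 32-bit integers
    define i32 @add(i32 %a, i32 %b) {
    entry:
      %sum = add i32 %a, %b
      ret i32 %sum
    }

Something like `llc add.ll` then lowers it to native assembly for the host target.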

7
rurban 12 hours ago 0 replies      
8
imtringued 1 hour ago 0 replies      
LLVM IR is not really suitable because of compatibility. Someone should create a portable assembler on top of LLVM instead.
9
mahdix 4 hours ago 1 reply      
This may be a little off-topic, but does anyone know a good and up-to-date tutorial for using LLVM from C?
10
andreiw 13 hours ago 5 replies      
The other thought I had here is that, AFAICT, IR is not a standard. There is no requirement that it remain compatible in 50 years or 5 months. There is no standard IR, and there shouldn't be, as that would become an impediment to compiler evolution and fit/optimization to newer architectures.

Doesn't AS/400 use an IR approach as well? Which let IBM seamlessly migrate the underlying CPU a few(?) times now?

11
nickpsecurity 12 hours ago 1 reply      
It's interesting, since I proposed using LLVM in place of inline assembly a while back. I got this counterpoint when I asked on ESR's blog:

http://lists.llvm.org/pipermail/llvm-dev/2011-October/043724...

Any LLVM experts have thoughts on this or my original goal within the context of LLVM's current situation?

12
andreiw 13 hours ago 1 reply      
In this way, IR would fulfil the same role Macro-32/64 did for porting VMS to Alpha and beyond. However, to my understanding (sorry, I was still crawling when VAXes were on the way out), the benefit there was retaining "VAX" syntax to avoid massive rewrites.

If you're starting from a clean slate, what's the benefit of writing IR? Why not use C? After all, IR won't really give you complete control over the generated code, and it's still an abstract VM (albeit one that obviously allows writing IR that will only sensibly compile on a specific arch - e.g. system register accesses and so on).

13
SFJulie 11 hours ago 1 reply      
Has anyone told the LLVM team that the tower of Babel is a myth and that it ends badly?

Some CPUs have specific idioms that are not only hard to translate but need to be used fluently, like a natural language.

Btw, I never use any software named after a myth that was a pure failure, such as Babel or the Death Star. It makes me feel like people intend to fail.

14
faragon 2 hours ago 1 reply      
TL;DR: vendor lock-in.
10
TiDB A Distributed SQL Database github.com
82 points by the_duke  8 hours ago   20 comments top 4
1
yogthos 6 hours ago 2 replies      
Wonder how this compares to ActorDB http://www.actordb.com/
2
lobster_johnson 4 hours ago 0 replies      
3
andrewchambers 6 hours ago 2 replies      
This is very similar to CockroachDB, only MySQL-compatible instead of Postgres-compatible.
4
devty 4 hours ago 2 replies      
Has Go become the go-to language for building SQL frontends for distributed databases? If so, why is that?
11
Five Year Mission simblob.blogspot.com
66 points by guiambros  9 hours ago   14 comments top 5
1
rl3 2 hours ago 0 replies      
This is incredible work. Bret Victor's as well.

Somehow I'd never seen Amit's work before, and Bret's only very recently, despite being inadvertently bitten by the same bug as both of them a couple of years back. Granted, a conservative estimate puts either of them as being about five hundred times more productive than myself.

Bret's work in particular is humbling. The "explorable explanations" concept was something I'd given a lot of thought to, and it turns out Bret had dedicated an entire article to elucidating it back in 2011, years prior.

Perhaps the greatest irony of being obsessed with accelerated learning is that while you're trying to build the tools or technology to enable it, you find yourself wanting the very thing you're building. e.g. "I could build my magic learning computer much more quickly if only I had a magic learning computer!" While this is frustrating, it at least serves as constant validation, as you try to force yourself to pay attention to some dry, overly-verbose reference on a particular subject.

2
bagrow 7 hours ago 1 reply      
This guy's stuff is phenomenal. I regularly consulted his hexagonal grid guide when doing certain simulations. Hope he doesn't give up completely.
3
forrestthewoods 1 hour ago 0 replies      
I'm confused. Is the blog Amit's full-time job? I thought Red Blob Games shipped games and this was something they did on the side.

I love the blog. I love it every time a new post comes out. But this reads as if 2016 was a total failure where nothing was produced and no goals were achieved.

How did Amit live over the past 5 years? Did he have no income? Did he do non-developer work making Red Blob Games just a part time endeavor?

I'm confused. :(

4
mathattack 5 hours ago 0 replies      
Keep it up, and get a new mission! :-)
5
obstinate 6 hours ago 3 replies      
This stuff is very impressive, but why is there nothing in here about revenue? I would be really worried if I spent five years on something(s) that didn't significantly improve my financial security. I'm sure not everyone shares my priorities, but I'd guess this one is fairly common. Maybe this individual is already set on that front and I'm simply not aware of the context, but his concerns about the stability of the gaming market suggest that this is not the case.
12
Ask HN: Why are flash memory prices going down so much faster than RAM?
118 points by altoz  7 hours ago   39 comments top 15
1
keenerd 6 hours ago 2 replies      
The obvious answer: flash can hold multiple bits per cell and RAM can't.

MLC is half as expensive as SLC. TLC is 33% less expensive than MLC. QLC is 25% less expensive than TLC and 75% cheaper than SLC. Not to mention transparent compression algos. As the controllers improve you can get more bits of storage from the same amount of flash for free. Longevity and reliability suffer, but hey, cheap SSDs!
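To make those percentages explicit, here's the cost-per-bit arithmetic with an SLC cell normalized to a cost of 1 (a sketch that ignores the real-world complication that denser cells also need more error correction):

    # Relative cost per bit if a cell costs roughly the same however many bits it stores.
    cell_cost = 1.0
    for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
        cost_per_bit = cell_cost / bits
        print("{}: {:.2f} per bit ({:.0f}% cheaper than SLC)".format(
            name, cost_per_bit, 100 * (1 - cost_per_bit)))

That prints 0.50 for MLC, 0.33 for TLC, and 0.25 for QLC, matching the percentages above.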

RAM only gets cheaper through improvements to semiconductor processes, which can also be applied to make flash cheaper (big fat asterisk: those processes are very different), while improvements to flash that allow more levels per cell can't be applied to RAM. The price difference between flash and RAM will only continue to grow.

Modern flash is quite "analog". The first company to figure out how to reliably store 32 voltage levels per cell (Five bits. PLC?) will make a quick billion.

2
astrodust 3 hours ago 2 replies      
DRAM type memory uses a completely different process than flash even if they're both a form of "memory". The performance of DDR-type memory is well beyond anything in the Flash world.

Today 2GB/s is considered very good for an SSD but that would be brutally slow for system memory. DDR4 memory is typically 30-60GB/s per bank with the low end being two-channel, the high end being four.

DRAM has also been the subject of aggressive research and development for many, many decades, while large-scale production of flash is a relatively recent phenomenon. It's the widespread adoption of smartphones, thinner notebooks, and ubiquitous USB keychain-type devices that has pushed it to the volumes it's at now.

There's also the concern that DDR memory must have a very high level of data integrity, bit-flip errors are severely problematic, and it can't wear out even after trillions of cycles. Flash has more pervasive error correction, and while wear is a minor concern, it's still possible to exhaust it if you really, really try.

I'd say the reason flash memory prices are steeply down is that the new "3D" process used by Intel and Samsung has been a big game-changer, allowing for much higher density. DRAM has seen more gradual evolution through the last few generations.

3
paws 6 hours ago 2 replies      
For some historical context this is worth remembering:

https://en.wikipedia.org/wiki/DRAM_price_fixing

4
CoolGuySteve 2 hours ago 0 replies      
Slow SSDs have gone down in price, but fast SSDs are still expensive. For example, you can get a 500GB Samsung 850 for about $130 but a Samsung 960 evo costs $250, and then another $100 on top of that for the 960 pro. Those 3 drives range from 600MB/sec to 2222MB/sec linear reads, the fastest costing the same as a 600MB/sec SSD did 3 years ago.

The demand for slow RAM drops precipitously after whatever Intel chipsets use it stop being used in new systems (not sure if the same is true in the embedded market). For example, nobody's buying DDR2 these days. So the economies of scale dissipate and fabs retool faster.

So while both devices have economies of scale, SSDs have an extra dimension to their demand curve for performance that allows for slower higher density chips to still be profitable.

5
slededit 5 hours ago 0 replies      
RAM is only capable of storing one bit per cell. Flash didn't really take off until MLC technology came around, giving the ability to store multiple bits per cell, which vastly increased density.

Theoretically RAM could be built that way, but it would be much slower. Every cell read/write would need to go through an ADC/DAC, and the noise is much higher due to leakage. This slowness isn't much of a problem for flash because its competition was spinning disks that were slow as molasses anyway.

6
zeta0134 6 hours ago 0 replies      
If I had to wager an uneducated guess, I would propose that the recent mainstream acceptance of Solid State Drives as a viable, affordable alternative to spinning disk drives, has created a sudden demand in Flash memory that's caused that industry to thrive.

At least at the retail level in the Best Buy where I worked until recently, I watched Solid State drives transition from something only high end computers had to something that was standard even among the lower priced value machines. We had customers complaining about the smaller drive sizes because they were so accustomed to the gigantic storage offered by the spinning disk media at its height in popularity.

I'd love someone with more industry knowledge to chime in though, as my own experience here is pretty limited. This is simply what I've observed in my own corner of the world.

7
candiodari 6 hours ago 0 replies      
I think this works by providing a second application for older chip manufacturing facilities. For SD chip designs, speed and size effectively do not matter (the controller will matter a hell of a lot more for final speed than actual storage chips speed). So they're using the chip fabs that everyone else is abandoning.

As a second bonus, even on old systems SD card circuits are relatively small (compared to a 5-60" LCD they certainly are). Wafers are round and old wafers are used to manufacture LCD displays, so small chips can be placed around them in the manufacturing process and get really good economics by having lots of manufacturing options.

So same reasons displays are getting cheap, except they're even better. So the race to the bottom is happening pretty fast for SD cards.

Not entirely sure about this. Might be entirely wrong, but I'm not sure how to confirm this.

8
haberman 4 hours ago 2 replies      
A question that seems related: what the heck is up with memristors? The Wikipedia page (https://en.wikipedia.org/wiki/Memristor) says that memristors are estimated for commercial viability in 2018, and have been built in prototype, but also says that there are serious doubts about whether memristors can possibly exist in physical reality! What gives?
9
ksec 5 hours ago 0 replies      
1. There is little demand for more memory.
2. There are only a few memory makers left on the market.
3. Moore's Law is no longer applicable; a smaller transistor isn't necessarily cheaper any more.
4. You can have bad NAND; you don't want bad memory.

China has decided to pour tens of billions into the NAND and DRAM industry by 2020; until then the price should be very much stable/predictable.

10
static_noise 2 hours ago 1 reply      
After reading through the answers here, I don't think the real answer has been given.

Is it the technology?

* Flash cells can store more data and can be produced more cheaply per cell. But they are more complex to read out, and slower.

This can explain some of the difference, but probably not the factor of 40 given by the OP.

* Flash and DRAM probably use different processes.

This could explain a bit but look at the next point...

* DRAM has a much longer history and (at least in the beginning) much higher capital investment.

...which means that DRAM should have the technological advantage. At least through economies of scale.

Is the cumulative investment in flash research already much bigger compared to DRAM research?

Is the process used to produce flash memory so much easier?

Is it the market?

* Obviously people pay the price.

* With DRAM people are hungry for performance more than they are for size.

* We already have more than enough DRAM. The latest MacBook Pro demonstrates that 16 GB of DRAM is enough for just about everybody, but flash storage goes up to 1 TB.

* Of those 16 GB DRAM the speed and power consumption are much more important than the raw size.

Coming back to the cumulative investment: I think the primary pain point for flash has been the price per GB. Flash could be stronger, faster, more reliable, and less power-hungry, but those are all secondary. It is fast and reliable enough by using very complex RAID controllers. Power consumption is not as big a problem, since HDDs already use a lot and the data mostly just sits around. The main driver is the price per GB, and that is where the money goes in flash development.

On the other hand, for DRAM, after some point it is mostly speed- and power-driven. Reliability has to be comparatively high, as every cell must work for years. Size is mostly increased by improving semiconductor processes, where flash probably uses a lot of the same technology. Using the layer-stacking technologies of flash is probably not yet applicable because it is not reliable enough and not compatible with the cell layout; maybe it never will be.

If we really were hungry for so much RAM we would probably get it. But we aren't. It's good enough. Progress slows down.

11
smitherfield 3 hours ago 0 replies      
Read/write speeds for DRAM are much faster than flash (although the gap is closing and the day may soon come where computers are sold with flash storage but no DRAM, and the distinction between memory and storage is done away with).

DRAM is also a much more mature technology than flash is, so more of the low-hanging fruit for improvement has already been taken advantage of.

12
deathhand 6 hours ago 0 replies      
RAM is written to faster than any other component other than the onboard processor cache.
13
eschutte2 6 hours ago 0 replies      
RAM is more expensive to produce in general (more transistors, more stringent specs as you mentioned), but as to rate of change, that seems more likely to be due to competitive pressures and maybe more rapidly increasing demand for flash in recent years versus RAM.
14
yuhong 6 hours ago 0 replies      
I think NAND flash is more flexible in terms of design than DRAM is, for example 3D NAND. NAND flash generally communicates through a separate controller that uses, for example, the SATA or USB buses, while the DRAM controller is built into the CPU or chipset.
15
nickpsecurity 6 hours ago 1 reply      
Looking at Hynix's financials, they're making enough money to reduce the cost of RAM quite a bit. Looks like it's just them maximizing their profit as one would expect. I assume it's similar for the rest. As always with for-profit firms selling hardware or I.P..
13
The moving sofa problem ucdavis.edu
515 points by vinnyglennon  17 hours ago   47 comments top 16
1
jmount 15 hours ago 2 replies      
In static analysis of forces you can in fact have an irreversible couch/sofa event (as in Dirk Gently's). It is usually described as jamming or wedging in the peg-in-hole problem (see here http://www.cs.cmu.edu/afs/cs/academic/class/16741-s07/www/ol... ) and arises when the implied forces can oppose any force applied to move the object. In this model you can stick a peg in at the wrong angle and it will jam and never come out.

I ran into this when discussing the sofa-stuck-in-the-staircase mystery from Dirk Gently's Holistic Detective Agency with a Ph.D. in robotics specializing in dynamics and physics. She pointed out that an idealized rigid system could jam in this way without any additional exotic explanation (beyond the exoticness of idealized rigid physics).

2
johansch 14 hours ago 2 replies      
This reminds me quite a bit of the awesome Kuru Kuru Kururin game:

https://en.wikipedia.org/wiki/Kuru_Kuru_Kururin

Gameplay: https://www.youtube.com/watch?v=VYvUZXT_43k

3
Jaruzel 15 hours ago 2 replies      
When I first read Dirk Gently shortly after it was published and got to the sofa bit, I really wanted a wireframe simulation of it as described in the book, but as a screen saver [1].

These days most if not all people no longer have screen savers, so that wish is likely to forever remain unfulfilled.

--

[1] I also wanted the rotating Starbug wireframe from Red Dwarf, but to my knowledge no-one has done that either.

4
soheil 9 hours ago 1 reply      
Must be said that the final shape looks very similar to animal feces probably because twisting intestines pose a challenge similar to that of the moving sofa problem.
5
mathgenius 5 hours ago 1 reply      
There is a similar problem, "Lebesgue's Universal Covering Problem", also unsolved:

https://golem.ph.utexas.edu/category/2015/02/lebesgues_unive...

I wonder if the maths is related at all.

6
lisper 7 hours ago 1 reply      
I faced this problem in real life, not with a sofa, but with a bed mattress platform. Just out of grad school, my wife and I were moving into an old craftsman-style house with a staircase that made a 180-degree bend at a landing with a fairly low ceiling. We squeezed the mattress through because it was bendable, but the platform was rigid and no matter what we did it just would not fit. Some measurement revealed that it would not go through the upstairs windows either. We ended up sawing the platform in half (it was made of wood covered in fabric) and re-assembling it upstairs. I screwed L-brackets to the two halves and connected them with bolts so that we could easily repeat the process when it came time to move out.
7
bluedino 16 hours ago 5 replies      
Of course, in the real world, furniture is 3D, and the obstacles you move furniture around are also 3D: banisters that are 3 feet high, couches with curved arms, ceilings with heights, stairwells...

Part of the fun of moving is trying to figure out how to orient furniture to get it into a room - or out of the room, since someone already got it in there, so of course it must come out.

8
spacehacker 14 hours ago 1 reply      
I am wondering what kind of solutions an evolutionary algorithm would come up with.
9
chrisallick 15 hours ago 1 reply      
If you like this, a student at SFPC in New York built a kinetic sculpture about this problem.
10
foxhop 9 hours ago 0 replies      
We should create flexible walls. Disrupt walls.
11
Cerium 8 hours ago 0 replies      
I showed this to my dad (a math teacher), he said: "When you get to the corner, tilt the sofa up, and then tilt down the other hallway."
12
xuva 16 hours ago 1 reply      
Perhaps coincidentally, the ambidextrous sofa has an area scarily close to the sum of inverse squares, π²/6 ≈ 1.6449341...
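For reference, the sum in question is the Basel problem value:

    \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6} \approx 1.6449341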
13
ben0x539 11 hours ago 1 reply      
Is my browser broken or does this page have a max-width on the content that's narrower than the animations?
14
amelius 11 hours ago 0 replies      
I wonder what shapes an automated heuristic approach (e.g. using genetic programming) would come up with.
15
masterponomo 8 hours ago 0 replies      
Clearly, the PIVOT point is the key.
16
Pica_soO 12 hours ago 2 replies      
What is the longest sofa you can move around in a Klein-Bottle?
14
Gitea A community-managed fork of Gogs gitea.io
144 points by ausjke  16 hours ago   63 comments top 13
1
alkonaut 58 minutes ago 0 replies      
Question to any Gitea devs/users: how can I get rid of the (terrible) i18n? I'm trying out the app but it's complete gibberish! It appears to do some kind of user-agent-detected i18n (which, by the way, an app should never do imho).

I just want English so I can have some kind of understanding of what I'm doing, but I can't find the setting in either my user profile or the app configuration.

2
dqv 15 hours ago 0 replies      
It was forked because they wanted a different management model that included more people.

https://blog.gitea.io/2016/12/welcome-to-gitea/

3
zsj 3 hours ago 1 reply      
A community is not about how many people have permission to write to the main repository. The Linux kernel is the example: Linus is the only one who has permission to write to the mainline tree.
4
anondon 5 hours ago 0 replies      
My major issue with Gogs and Gitea is that they lack a cache to store the git history/log. So if you work with repos that have many commits (example, the git repo itself or the linux kernel repo), you will forever be waiting for the commit message for each file to load because Gogs/Gitea scans the entire git log for each file.
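For illustration, a minimal sketch of the kind of cache being suggested (hypothetical names, not Gogs/Gitea code; a real implementation would also need to invalidate entries when new commits are pushed):

    import subprocess
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def last_commit_for_path(repo_dir, path):
        # Ask git for the most recent commit touching this path once,
        # then serve repeated page loads from the in-memory cache.
        out = subprocess.run(
            ["git", "-C", repo_dir, "log", "-1", "--format=%H %s", "--", path],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()

    print(last_commit_for_path("/path/to/linux", "MAINTAINERS"))

Without something like this, every directory listing turns into a history walk per file, which is exactly what makes huge repos crawl.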

This was an issue when I tested Gogs a few months ago, and I don't see any mention of a cache, so I think it's still an issue.

For smaller repos though, Gogs works incredibly well.

Regarding this fork, it makes sense if the owner of gogs is not giving write access to others. At the same time it would be a shame if Gitea becomes popular and overshadows Gogs. I hope they can work out a mutually acceptable solution and merge.

5
vvarp 11 hours ago 2 replies      
As someone who's currently using gogs I'd like to know what are the primary differences between gogs and gitea - right now the website doesn't talk too much of benefits coming from the switch.

Sure, community managed sounds great, but does that automatically guarantee a solid project vision, predictable release cycle and lots of new features?

Don't want to sound negative, but I think the reasons for the fork need to be clearly presented, and potential switchers (like myself) need to be assured there's a better future with the fork rather than the original.

6
ausjke 14 hours ago 0 replies      
I played with Gogs half a year ago, and Gitea is my first software try-out of 2017. It worked beautifully for pretty much everything I wanted, and it requires way less CPU/memory to run (compared to GitLab, etc.). I'm sold.
7
kureikain 5 hours ago 1 reply      
His reply is very reasonable, and some of the Gitea devs are a bit harsh in tone IMHO. Or maybe that's because of my English level.

Anyway, I think this is how OSS should be. We shouldn't force people to merge the code that we want. At the end of the day, it's his project anyway.

Good luck to both projects.

8
mongrelion 14 hours ago 3 replies      
I wonder how long it's going to take before Gogs and Gitea get merged, just like has happened in the past with major forks, Node.js + io.js being one example.
9
zyang 14 hours ago 3 replies      
Hosted on github...
10
pryelluw 14 hours ago 1 reply      
The Gogs and Gitea websites feature the same basic content on the front page. Is it also open source? I'm curious and don't mean to stir up shit.
11
throw2016 8 hours ago 1 reply      
I think a fork should not be a casual decision as the main developer would have put in hundreds or thousands of hours on the project and was motivated to get it to this state. A project by a small team or even a single person will obviously be constrained in many ways.

Now if the project has potential or takes off a 'community' should fork it only if all other avenues have been exhausted and with good reasons.

It's important for a 'community fork' to let the community know the exact rationale, or 'community' can easily become an excuse for some to seek to control or capitalise on others' work. This does not help open source, especially if the main developers are too busy developing while those who fork have time to market the fork to a community.

12
Sir_Cmpwn 13 hours ago 1 reply      
This is a pretty immature community IMO. The Gogs maintainer just has a life sometimes, and they want to fork because he leaves some PRs open for a while. Learn some patience; the project will be better off for it.
13
madeofpalk 9 hours ago 1 reply      
Ahhh Gogs. This project has always rubbed me the wrong way with how much of a blatant steal of GitHub it is.

Sure, 'copy' the features, and take inspiration from the UI and layout, but so much of gogs looks identical to GitHub that it's nothing but stealing.

15
Lenovo ThinkPad T460 A Good Linux Laptop for Development karussell.wordpress.com
108 points by karussell  3 hours ago   132 comments top 26
1
Insanity 1 minute ago 0 replies      
I have a Lenovo laptop at home that is quite old now (A dual-core machine that I used throughout university). I am not sure which one it is, but I have been running Ubuntu on it for quite some time now, and it is actually the only of my devices that has always worked perfectly with a vanilla ubuntu installation.

Recently running 16.04 on it as well, once again an update without any issues.

Whilst on my other machines (an HP laptop and a custom desktop) I always had _some_ problems with Ubuntu or other flavours of Linux. The HP laptop had the movie-player problem mentioned here in the post, and had some issues with running Skype webcam/voice chat.

The desktop had an issue of freezing up randomly, and some audio issues at first.

Currently Ubuntu is running on all these machines, but the old Lenovo laptop was the only one that in all these years worked without any issues.

2
2bluesc 3 hours ago 4 replies      
I've owned Thinkpads in the past and almost bought the T460 last month, then I discovered the Dell XPS / Precision line and fell in love. I picked up a manufacturer refurbished XPS 15 on eBay and wound up swapping in a Dell Precision E3-1505M motherboard I stumbled across.

The line has Intel quad core CPUs, minimal bezel (my 15" is almost the same size as the 14" System 76 Galago Ultra Pro it's replacing), reasonably slim for a quad core, 84Wh battery, 10+ hours of low power dev (baseline power is about 5.25W on my 8GB + 1080p + Xeon E3-1505M machine) in (Arch) Linux? YES.

Oh yeah, and for nerd points the Dell Precision M5510 has the option for Intel Xeon and Ubuntu stock for people doing CPU intensive Linux work (in my case Linux embedded system builds that grind for tens of minutes to two hours).

To add icing on the cake, you can easily get parts (batteries, motherboards, etc) on eBay if you ever need to fix it yourself which is a sharp contrast to the non-existent System76 Galago Ultra Pro I picked up a few years ago after I ditched my last Thinkpad.

I keep looking at the Thinkpads, but they seem a generation behind.

3
TobbenTM 3 hours ago 3 replies      
And one of the most awesome things about Thinkpads not mentioned; you can get every (most?) replacement part directly from Lenovo. You can actually look up the part number in the service manual, order it, and replace it yourself. For nerds like us, this is sooo nice sometimes, when you just wanna get it fixed quickly, from the comfort of your own home.
4
endgame 2 hours ago 6 replies      
Lenovo are on my personal shitlist after superfish and abusing the windows platform binary table. When my current laptop dies (a thinkpad T440p that I'm reasonably happy with), I may have to suck up the performance hit and go to minifree for a machine I can actually trust.
5
karussell 3 hours ago 2 replies      
After I made a few comments here on HN about the T460, I felt I should condense all the stuff into a short blog post. Feel free to add your experience or alternate developer machines, with pros and cons.

What I missed at Dell and Apple is the possibility to configure your hardware a bit so that it better fits your needs. This was better for Dell when I purchased the Dell Latitude 7 years ago.

I did not choose an MBP because I feel safer with Linux in the long run. I heard that the security updates stop after two years and the software upgrades make the 'old' hardware a lot slower.

In the end every OS somehow sucks, but Linux sucks least.

6
aiur3la 2 hours ago 4 replies      
> ... I find the boot time compelling enough (~23sec until login, plus 2sec to open the browser) that I do not need this.

I think something is slowing down your boot; I get a faster boot on a 2008 ThinkPad running the same OS.

OT: systemd was supposed to improve boot performance, but it has actually become much worse. Upstart on a weak Chromebook boots in under 2 sec; why shouldn't your current-generation ThinkPad with a fast SSD match that?

7
herbst 3 hours ago 5 replies      
I went with a T420 recently, mostly because of the keyboard, but also because $400 for laptop + battery + Samsung SSD + 16GB HyperX RAM sounded so cheap I could not resist. And honestly, even after a 2015 MBP it feels perfect for all my needs. In fact, due to the superior RAM and SSD it often feels way faster than the MBP that cost five times as much. Plus it has way better battery life.

Seriously Thinkpads are the best dev laptops ever.

8
mentat2737 2 hours ago 0 replies      
I bought a T420s a few months ago.

I am selling my much newer MacBook Pro Retina now, as the ThinkPad is so much more functional: the keyboard is amazing and the feel of the machine itself is fantastic.

I am thinking of buying an X260 brand new because I need a newer CPU and better battery life, but from now on I'll only buy ThinkPads or Latitudes (I have one at work, an amazing machine).

9
giis 2 hours ago 0 replies      
My experience with Lenovo:

Personal machine (Ubuntu/CentOS with 1 or 2 VMs running sometimes): for me, an AMD quad-core A10-7300 with 8GB DDR3 RAM and a 1TB HDD (Acer Aspire E15) is a perfect Linux development machine, and it costs less than $500. Unless you are running 3 or more VMs or doing things like high-end data processing, using 16GB of RAM for development is overkill.

Work machine (Windows / Fedora 19 with 3 VirtualBox VMs running most of the time): we (a team of 7) received new Lenovo ThinkPads in 2012, with a 256GB SSD, 16GB RAM, and an i7 processor. Within 18 months, 3 or 4 of my colleagues faced hardware-related issues (suddenly stopped booting, etc.). Luckily mine survived until I left the company in 2015.

10
EdwinHoksberg 2 hours ago 2 replies      
I bought a T460p a few months ago and am not very happy with it. The problem I have is that Linux (I am running Debian) has quite a few problems with the Skylake architecture, especially the graphics driver. I tried everything I can think of, installing the Intel driver manually and installing the newest kernel (4.9.0), but I still have trouble with graphics glitches. So when I buy a new laptop I will definitely avoid the Skylake arch; every other generation I tried worked a lot better.
11
mathieuruellan 25 minutes ago 0 replies      
I bought a ThinkPad S540 3 years ago, and a few days after the end of the warranty the motherboard died. Lenovo was OK with replacing it for free. 3 months later the motherboard died again, and this time there was no free replacement. The S540 was really cheaper, but without the 3-year warranty and without full mil-specs. If you want reliability, choose the T/X series. A 3-year on-site warranty is already included, a sign they trust these products. If you add the optional warranty to the cheaper products, the price is higher than the equivalent in the T series.
12
alanl 51 minutes ago 0 replies      
I believe Red Hat staff use ThinkPads (T series and X series), meaning support for Linux is pretty good out of the box, and you can find a fix/workaround for most issues on a Fedora forum.
13
aceperry 2 hours ago 2 replies      
I love these Linux-on-laptop articles. I've used Linux since Red Hat 5.0 and have almost always had to configure things to get everything to work. Nowadays I don't really have the time to dick around (I used Gentoo for a long time), so I rely on Ubuntu to make it a simple plug-and-play install. Even with most of today's laptops, Ubuntu seems to play well compared to the bad old days. I find I can throw Ubuntu on any laptop and get working as soon as I put in a few customizations and tweaks. Really, in my mind, Linux has come a long way. But it's great to see how easy it is to get Linux up and going on most laptops today.
14
cornedor 3 hours ago 3 replies      
This week at CES the T470 will be announced. So if you're thinking about buying one you might want to wait for that
15
INTPenis 1 hour ago 0 replies      
My 2 cents about thinkpad, I have an x230 and am very happy with it.

3. Same resolution but smaller monitor and smaller overall size makes for easier traveling imo.

7. I have the exact same issue with hitting the touch pad when typing, but I've learned to go slower and avoid it.

8. I first ordered the x230 by accident with the larger battery and was amazed at the working time of 12-15 hours, but it was also quite bulky. So I re-ordered with the keyboard backlight that had been missing and with the slimmer battery, and I'm quite happy with the slimmer form factor while still getting a good 6 hours of work time.

11. It's clearly not a media machine, it even lacks shortcut keys for pause/play media.

16
mightymaike 1 hour ago 0 replies      
T440s user here: the thing has already broken 3 times in two years. The quality of ThinkPads is rapidly declining. I'm going to move away from ThinkPads after being satisfied for about 15 years. The moves Lenovo made recently are also a blocker.
17
bbtn 2 hours ago 2 replies      
I have a Lenovo ThinkPad X1 Carbon. At first I thought I was buying an almost IBM-quality laptop. The craftsmanship is not good, it looks cheap, the screen flickers, and there is a constant 20 kHz hiss from the CPU fan. The battery does not last as long as promised (they had ads: battery life longer than a MacBook Air's). The touchpad is not very responsive. And finally: yet another new type of USB-sized adapter port that I have not seen anywhere else.
18
ekianjo 3 hours ago 2 replies      
Did they make a 460s version of the 460 this time around?
19
dajonker 2 hours ago 1 reply      
I have had an X240 (the 12.5" version of the T440) for about 2.5 years now, and while it was indeed super quiet when new, today the fan is almost always on, even if only at the lowest speed. I probably need to open it up and give it a good cleaning, and/or replace the thermal paste between the CPU and cooler.
20
eltoozero 2 hours ago 2 replies      
The x201 is a 12" ThinkPad model with an i5 at 2.5 GHz; unfortunately only DDR2, but jam in an SSD and it's a crazy bargain at ~$100 on eBay.

Coreboot support too if that's your kind of thing.

Stay away from the tablet x201, forget that noise.

21
middleclick 2 hours ago 0 replies      
I have the T450s, and it's a great laptop for sure. But as someone who has owned laptops from the previous IBM ThinkPad series, there is a noticeable deterioration in quality ever since Lenovo took over. Is it only my laptop, or have others noticed this as well?
22
Zamicol 1 hour ago 0 replies      
I have a T420 and finally just updated to a Dell E5470 (8V22N).

All of Lenovo's offerings seem overpriced as with the industry as a whole.

23
mathieuruellan 1 hour ago 0 replies      
And a 3-year on-site warranty, not optional (already included in the price).
24
okasaki 3 hours ago 1 reply      
I could buy 3 equivalent laptops (finding IPS might be a problem) for the price. Is the Thinkpad brand really worth that much?
25
aiur3la 2 hours ago 3 replies      
I am sure it is a great laptop, but it also looks like a X1 with larger bezels. Why would you want that instead of X1?
26
tuananh 3 hours ago 2 replies      
Despite so many problems that the OP stated, he's still recommending it!
16
Swiss say goodbye to banking secrecy swissinfo.ch
39 points by jonbaer  11 hours ago   11 comments top 3
1
denzell 3 hours ago 4 replies      
Only for certain countries: https://www.sif.admin.ch/sif/en/home/themen/internationale-s...

Tax evaders from USA are safe.

2
vikiomega9 41 minutes ago 0 replies      
Why does confidentiality play a big factor here?
3
briandear 51 minutes ago 4 replies      
I feel that governments ought not have a right to my private financial information without a warrant. If they think I have committed a crime, they ought to be forced to prove such a crime. This is no different than every home having 24/7 interior surveillance with the hope that a crime could be discovered.

Governments, especially so-called "progressive" governments, have the wrong belief that our money belongs to the government first and citizens are permitted to keep their 'share.' I know many people here think that a total loss of financial privacy is 'fair' because of some sort of misguided class warfare, but how would you feel if your entire computer and web history were transmitted to the government for the purpose of identifying crimes you might commit, as opposed to in response to a warrant that establishes probable cause that you actually committed a crime? Having money in an overseas account does not even come close to reaching a threshold of probable cause for tax evasion. But having overseas money (or in my case, a domestic French account) doesn't mean you are a potential criminal any more than being black means you are a potential criminal. This is financial "stop and frisk."

In the US this loss of financial privacy and these 'disclosures' amount to a warrantless search. I welcome any counter argument.

Just to be clear, I'm not a 'fat cat' billionaire -- just an American living in France that has the 'privilege' of being taxed (heavily) by both the US and France. My French bank has to disclose information about my accounts to the US Treasury and I have to fill out an FBAR each year to 'prove' to the US that I am tax compliant, thanks to FATCA.

If it weren't for the obvious class warfare aspect, liberty minded progressives would be losing their minds over this violation of the fourth amendment. They'll howl over warrantless wire-taps but many remain silent with the financial equivalent of wire-taps.

I am not advocating tax evasion, nor do I support or defend those actually evading lawful taxation; I am opposed to a police state where everyone is a criminal until proven otherwise.

17
Efficient Parallel Scan Algorithms for GPUs [pdf] nvidia.com
29 points by tosh  8 hours ago   2 comments top 2
1
john_owens 7 hours ago 0 replies      
I recommend the Merrill/Garland NVIDIA technical report from March 2016, "Single-pass Parallel Prefix Scan with Decoupled Look-back", as the current state of the art for scan on GPUs.

https://research.nvidia.com/publication/single-pass-parallel...
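For anyone who hasn't met the primitive: a scan is just a running reduction over an array. Here's a tiny sequential sketch of an inclusive prefix sum, the serial baseline these GPU algorithms parallelize:

    def inclusive_scan(values):
        # out[i] = values[0] + values[1] + ... + values[i]
        total, out = 0, []
        for v in values:
            total += v
            out.append(total)
        return out

    assert inclusive_scan([3, 1, 7, 0, 4]) == [3, 4, 11, 11, 15]

The hard part on a GPU is producing the same result in a single pass over global memory across thousands of threads, which is what the decoupled look-back approach in the linked report addresses.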

2
lorenzhs 2 hours ago 0 replies      
Note that this is from 2008. Customarily, this would be noted in the submission title.
18
U.S. Path on Legal Marijuana Forces Rethink in Mexico wsj.com
66 points by tosh  10 hours ago   37 comments top 3
1
Pica_soO 8 hours ago 3 replies      
Unemployed farmers and butchers, that's all that remains, if you state-distribute the drugs and fire the thugs.

With the rise of synthetics, and the first maker chem-bot not far away, I wouldn't be surprised if the narcos did what all businesses do when change is upon them: lobby for protectionism.

I'm against drug liberalization, but just to reduce the economic and social fallout this would be worth it.

2
h4nkoslo 8 hours ago 5 replies      
It's too late. The cartels were allowed (even encouraged) to grow to the point where they are no longer controllable by the state. If you legalize the drug trade, they will continue to assert quasi-legal control over it the same way that mafias have historically controlled things like garbage removal, and probably branch out into other activities (as they actually have been doing, eg "taxing" avocado farmers). They have no reason to allow themselves to be "outcompeted" by a bunch of MBAs with excellent expertise in logistics but no guns.
3
dmix 9 hours ago 2 replies      
Non-paywall link: http://www.foxnews.com/world/2016/12/27/us-path-on-legal-mar...

The usual web link didn't work for me this time.

19
Unexpected Risks Found in Editing Genes to Prevent Inherited Disorders npr.org
107 points by lobster_johnson  16 hours ago   63 comments top 12
1
gioele 15 hours ago 9 replies      
Unexpected Risks Found in Editing Binary Code to Fix Bugs in Legacy Codebase.

Genetic engineering has exactly the same problems as modifying binary-only code. After some very complex observation of the system you may spot the point that is responsible for a certain problem, but you will never be sure (as in provably sure) that the "fix" you put in place:

1. is going to stop all the occurrences of the behavior you want to fix;

2. will only affect the behavior you want to fix;

3. will have no effect on other systems that you did not modify.

(The article is mostly about 1 and a bit about 2 and 3.)

Modifying legacy code is hard enough when the uncommented source code is around. Changing directly the binary is both a great engineering feat and something to be scared of.

2
mirimir 4 hours ago 0 replies      
> Some research suggests that nuclear genes evolve to sync well with a mitochondrial haplotype, and that when the pairing is suddenly switched, health might be compromised.

Normally, eggs provide all mitochondria. The sperm's mitochondrion gets left outside during fertilization. So in most zygotes, there's a new pairing between the father's nuclear genes and the mother's mitochondrial haplotype.

I wonder what that entails. I do understand that most zygotes fail to implant, or get terminated very early in development. Maybe this is one of the causes. Anyone know?

3
anigbrowl 13 hours ago 0 replies      
The article is good but the headline is terrible, falsely implying some surprising new discovery rather than the sobering actuality that we lack a good theory of genetic development. A better one would be 'Gene Editing's Risks Are Hard to Manage.'
4
jcoffland 13 hours ago 0 replies      
Seeking out and using so-called super mitochondria for this therapy could have its own problems. If future generations only had such mitochondria, they could become immune to future therapies. By selecting and actively promoting "healthy" mitochondria we could limit mitochondrial diversity, which could lead to diseases that affect huge swathes of the population. Obviously we are not there yet, but these are some of the dangers of applying gene therapy to broad populations.
5
marchenko 15 hours ago 2 replies      
The nuclear/mitochondrial mismatch theory is interesting. Communities with a long history of admixture often have a significant number of individuals with mitochondrial DNA and nuclear DNA of divergent geographic origins, but I have not seen it observed that they are subject to any deleterious mtDNA/nucDNA 'mismatch' effects. I wonder what the threshold for this effect is.
6
majkinetor 13 hours ago 0 replies      
Given that this is a symbiotic relationship, it probably works both ways, but since the number of genes coded in the mitochondria is low, the effect is probably much harder to notice.

So it might happen that even if 100% of the 'old' mitochondria are cleared, the new symbiosis will also be defective, as some of the 40 genes required by the hosting cell might be missing from the new mito lineage that replaces the original one.

7
reasonattlm 15 hours ago 0 replies      
This isn't about editing genes, it is about replacing mitochondria. Not the same thing at all. Mitochondria are like bacteria in the way they replicate. There are hundreds of them in a cell. They can pass components around between one another, can split and fuse. They are integrated with quality control mechanisms that cull the herd. It is a complicated picture.

As the article notes the procedure isn't effectively clearing out all of the old mitochondria, and dynamics can favor them in the future.

It is known that variations in mitochondrial DNA produce radical differences in competition between mitochondrial strains in the cell. That is how deletions affecting OXPHOS machinery cause one mutant to take over the whole population very quickly - some differences produce mitochondria that either replicate better or resist quality control more effectively. That one is one of the causes of aging, but the same principle exists for other differences between mitochondrial genomes. If you put two or more in a cell and let them fight it out, hard to say in advance what the outcome will be given present knowledge.

So, basically, the people working on mitochondrial replacement need to make their tools for cleaning out the old mitochondria more efficient. If 100% success is achieved, that genome isn't coming back.

Alternatively, actual gene therapy might be a better approach - though challenging if you want to edit mitochondrial genomes, as you have the same problem of coverage and competition. There is allotopic expression, moving mitochondrial genes to the cell nucleus, which is feasible via today's gene therapy. Given the amount of work needed to copy mitochondrial genes into the nucleus, however, the challenge being how to alter them so as to get the proteins produced back to the mitochondria, something that has been achieved for three genes so far, it might be more cost-effective to work on better clearance and replacement technologies for the near term of assisted reproduction needs.

8
tormeh 12 hours ago 0 replies      
What I've thought about is replacing the DNA in the cells of adults with an average of multiple samples of their DNA, to correct for the mutations that accumulate in our DNA as we age. I think I've heard something about progress in this area, but I don't know what to google for. Does anyone know if there's any news in this area?
9
Karunamon 15 hours ago 0 replies      
tl;dr: Earlier this month, a study published in Nature by Shoukhrat Mitalipov, head of the Center for Embryonic Cell and Gene Therapy at the Oregon Health and Science University in Portland, suggested that in roughly 15 percent of cases, the mitochondrial replacement could fail and allow fatal defects to return, or even increase a child's vulnerability to new ailments.

It's not a risk of editing the genes so much as a risk of the treatment failing.

10
ufmace 6 hours ago 1 reply      
Anyone else noticing this site to be broken in Chrome? Seems that the CSS files aren't loading due to some sort of security error from their CDN site.
11
damon_c 14 hours ago 1 reply      
Unexpected by whom? Wouldn't risks be the first thing one would expect when editing genes?
12
lngnmn 15 hours ago 0 replies      
A single gene could be responsible for more than one trait? This is genetics 101.
20
Generating Videos with Scene Dynamics mit.edu
44 points by Ivoah  11 hours ago   7 comments top 4
1
undershirt 10 hours ago 0 replies      
Once, when waking from a nap, I was able to consciously keep my "dream eyes" open with my real eyes closed. I could vividly see images my dream was still generating, and it looked a lot like this.

Makes me wonder if I'm just influenced by the work happening in ML, or if we are really approaching what the brain is already doing.

2
gallerdude 9 hours ago 1 reply      
People may call BS, but I do think we'll reach a point where we can generate coherent books or movies. It'll take many more neurons, but I think the possibility is out there.

The real question is where these "fake" pieces of art will be placed in our society.

3
goblin89 3 hours ago 0 replies      
Future prediction capability here can greatly improve video monitoring of all kinds. Run a constantly trained system like this in real time over incoming streams and let the agent observe generated predictive videos N units of time ahead.
4
jedimastert 8 hours ago 1 reply      
>These videos are not real; they are hallucinated by a generative video model.

I'm not sure why, but the fact that they used the term "hallucinated" is a little unsettling.

21
OpenStreetMap plugin for Unreal Engine 4 github.com
313 points by mariuz  19 hours ago   55 comments top 16
1
stephengillie 17 hours ago 4 replies      
And thus we get VR games like Slenderman stalking you, on the streets of your home town! Or GTA 6 - Your City. ARK or Day Z, but set in actual Detroit. Oh, what amazing gaming possibilities this unlocks.

And hopefully, self driving cars and quadcopters and other drone robots can use a 3d model of a city, to better navigate the real world. How about teaching a self driving car, with the GTA 6 - Your City game?

2
mvexel 8 hours ago 0 replies      
The export function on osm.org is a pretty expensive operation on the live database and not the best way to download OSM XML data for most people. If you're looking for a city-sized extract, https://mapzen.com/data/metro-extracts/ is a good source. They have pre-built ones for commonly requested areas, or you can select your own bounding box.
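For small areas, the Overpass API mirror works too; here's a minimal sketch, assuming the public overpass-api.de instance is up and you stay well under a city-sized bounding box:

    import requests

    # bbox order is min_lon,min_lat,max_lon,max_lat
    bbox = "11.54,48.14,11.58,48.16"
    resp = requests.get("https://overpass-api.de/api/map",
                        params={"bbox": bbox}, timeout=120)
    resp.raise_for_status()
    with open("extract.osm", "wb") as f:
        f.write(resp.content)

For anything larger, a pre-built extract really is the polite (and faster) option.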
3
anthk 17 hours ago 0 replies      
4
Qantourisc 13 hours ago 2 replies      
Before people get all excited: OpenStreetMap barely contains any 3D data. You can get a layout and guess the height of the houses if it's not tagged/stored in OSM, and it usually is not.
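For a sense of what that guessing looks like, here's a rough sketch using the standard OSM tags (the 3-metres-per-level figure and the 2-level fallback are just common heuristics, not anything from the plugin):

    def estimate_building_height(tags, metres_per_level=3.0, default_levels=2):
        # Prefer an explicit height tag, e.g. "12" or "12 m".
        if "height" in tags:
            return float(tags["height"].split()[0])
        # Otherwise guess from the number of floors.
        if "building:levels" in tags:
            return float(tags["building:levels"]) * metres_per_level
        return default_levels * metres_per_level

    print(estimate_building_height({"building": "yes", "building:levels": "4"}))  # 12.0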
5
karussell 13 hours ago 1 reply      
Reminds me of this: SuperTuxKart

SuperTuxKart is a free, open-source racing game. This page is about generating 3D levels for the game, using OpenStreetMap data.

http://wiki.openstreetmap.org/wiki/SuperTuxKart

https://youtu.be/smf9OCVzwMo?t=1212

6
d33 17 hours ago 4 replies      
> Keep in mind that many locations may have limited information about building geometry. In particular, the heights of buildings may be missing or incorrect in many cities.

How do they figure out the building heights at all then?

7
thomasdd 17 hours ago 0 replies      
Looks cool to me. A Quake/Unreal-like 3D-game, with realworld maps, could be cool :)
8
dleslie 18 hours ago 3 replies      
Would assets built upon the exported XML be considered an "adapted database" in the license terms?
9
gravypod 17 hours ago 1 reply      
Does UE4 support paging landscape?
10
napsterbr 16 hours ago 0 replies      
We are doing something like this for hackerexperience.com :) hope to see more games with real world interaction!
11
orblivion 16 hours ago 0 replies      
I would love to try this on for size just as a desktop application for practical purposes. It would be nicer under a freely licensed engine though.
12
agumonkey 14 hours ago 1 reply      
Any similar thing for Valve Counter Strike engine (whatever it is)?
13
appleflaxen 14 hours ago 0 replies      
Start the countdown to moral panic about the opensource first-person shooter that is set in Washington DC, NYC, or Chicago, and which is contributing to gun violence.
14
dbg31415 14 hours ago 0 replies      
The next installment of XCOM is going to be sweet!
15
out_of_protocol 18 hours ago 1 reply      
Unreal Engine 4, not Unread.
16
tantalor 17 hours ago 2 replies      
Have any example renders?
22
Ask HN: How to learn new things better?
165 points by kahrkunne  14 hours ago   73 comments top 38
1
Asdfbla 12 hours ago 4 replies      
Maybe just throwing it out there as an additional resource: Coursera has a "learning how to learn" course, which includes lots of references about the theory of learning but many hands-on tips too. It's not too time consuming and doesn't cost anything, so probably can't hurt to look at it. I liked it and try to apply some of the ideas when learning.

https://www.coursera.org/learn/learning-how-to-learn/

2
daniel-levin 20 minutes ago 0 replies      
I would advise you to be as experimental as possible, and see what works for you. Each person learns things in their own way. Just picking a particular path (such as using Duolingo, as others have suggested) may work. It may not. I have found that this rings especially true in technical subjects.

My personal suggestions are Duolingo, and "Drawing on the Right Side of the Brain" by Betty Edwards. I went from stick men to badly-proportioned but otherwise lifelike still-lifes in a few hours with this book. I have a very strong audio memory so Duolingo works well for me. The most important aspect to getting not-terrible at anything is deliberate practice [1]. Drills, and boring exercises work very well for me.

[1] https://en.wikipedia.org/wiki/Practice_(learning_method)

3
whyileft 10 hours ago 1 reply      
I don't have much on learning in general but drawing has been a big chunk of my life so I figured I'd just chime in on that.

Drawing is a giant world that means many different things. Being good at drawing is also very subjective.

A fantastic example of that is the book "Drawing from the Right Side of the Brain" which I see mentioned already. That book is an interesting read and I did enjoy it myself, but I should caution that it teaches more about visually tracing. Some people consider that an example of skilled drawing and if that is what you are looking for then go for it.

From another angle some people consider skill at drawing to be how pleasing it is to look at. This generally has more to do with the line work and shading and color usage. You can draw a significantly anatomically inaccurate arm with beautiful line work and styling and some people would consider that skilled drawing.

Yet another would be to create something from the mind without a visual reference. This has more to do with an understanding of mass and depth and space than either of the two above. And to some people this is what they would consider skilled drawing.

These are only three of the many, many possible goals of a drawing.

Why am I telling you this? Because to me the endeavor of learning to draw is learning what you personally consider a good drawing. The physical world is not made up of lines and smudges. When you draw you are continually making those translations and decisions. That process of discovery is what will lead to you become better at it.

In the end, there are only two reasons why you put a line in the wrong place: either you physically missed the correct spot with your pencil, or, more likely, you haven't discovered where the right spot is yet.

4
adpoe 5 hours ago 1 reply      
I've decided to learn many, many things over the years: programming languages, human languages, musical instruments, drawing/painting/art, sports, math, chess, etc...

What it comes down to is spending time getting your hands dirty making things (or getting real practice), even if your output sucks for a long time. (And it will.)

Favorite personal example: One day, when I was around 19, I decided that I wanted to be an artist. I hadn't seriously drawn anything since I was about 10. My current skills were atrocious, but I started drawing every single day, anyway, undeterred.

Of course, at first I was awful. But I copied old master paintings, drew pictures of famous sculptures, etc.; all of my free-time, I spent drawing. And slowly, but surely, I got better at it. I did this every day for probably about 3 years or so, and by the end of it, I was very accomplished. But it was a constant effort that took years. I probably did over 1,000 drawings, hundreds of paintings, and so on. And about 90% of them were awful. But the good stuff, it was really really good. I guess that's the price sometimes. Nobody is a genius all the time. Even Michelangelo, or Picasso.

The thing is, if you find something you enjoy, it doesn't feel like work or drudgery. (Even though drudgery is the only way to get better.) Instead, it's an activity that you want to spend time on, and when you do--time passes so quickly you don't know where it went. It's like living life on fast forward. (Maybe that feeling's the real-life inspiration for the old trope of the training montage. A deep kernel of truth beneath the fantasy, after all?)

5
throwaway_proc 33 minutes ago 1 reply      
I want to strongly recommend this book on procrastination: https://www.amazon.com/Procrastinators-Digest-Concise-Solvin...

I started studying math at university this fall for the first time in my life. I'm 32 years old, and I have extreme problems adjusting to the required workload. The other freshmen struggle as well, but I clearly have more problems.

The problems arise most noticeably when I'm not subject to direct peer pressure, that is, when I'm not sitting in university doing homework with my group partners. As the workload is (or seems) so extreme, at least for us freshmen, I just didn't have time to do anything other than sit in uni doing homework, often until 8 or 10 pm, or even into the night when there was a deadline the next morning.

What I should have done differently so far is prioritize learning the material over just trying to get stuff done inefficiently. I realise that these inefficiencies, and getting rid of them, are a normal part of growing up academically.

The procrastination problem shows up most visibly in my spare time, where I have the time but just cannot bring myself to learn the material. This is where the book really helps. I admit I just finished it, and it will take some time to show results. The thing is, I knew for years (which I have partially wasted) that gaming, reddit, HN, twitch.tv, etc. are a strong negative influence for me. The book helped me realise just how bad my procrastination problem really is, and it has already helped me be more productive in situations where I otherwise just couldn't bring myself to work on important stuff due to distractions.

6
tudorw 1 hour ago 0 replies      
You could try this technique; MAP training: combining meditation and aerobic exercise reduces depression and rumination while enhancing synchronized brain activity

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4872427/

7
gargarplex 8 hours ago 3 replies      
I enjoyed duolingo (https://www.duolingo.com) for learning Swedish. I liked the game mechanics, honestly. Here's why:

* Positive reinforcement: cheerful noises and visual progress once I completed a section.

* Negative reinforcement: if I didn't practice at all that day, I would get a notification at 11pm. If I ignored them for a while, they were super passive-aggressive, saying things like "These don't seem to be working. We'll stop sending them". I felt guilty and would start again.

* I wanted to keep my daily streak going. It made me feel like Jerry Seinfeld with his "write a joke a day; put an X on the calendar" technique.

* I liked the concepts of experience (exp) and levels; it let me feel like I was making concrete progress, even if I was totally incompetent. I indulged my gamer side while still being productive!

* duolingo works just as well on browser as mobile

* Training sessions were short enough that if I only had a few minutes of downtime, as long as I had my phone on me, I could actually be productive. This made subway rides that much better.

* duolingo offers a practice mode where I could work specifically on my speed if I had longer chunks of time and wanted to dive deeper.

And today, I can totally speak intelligible Swedish [1]. It worked.

[1] https://www.youtube.com/watch?v=OxoXe5FDIkA some phrases are off (I should've said "hur man sköter marknadsföring själv") and pronunciation is off.

8
welanes 10 hours ago 3 replies      
So, I've spent the last 7 days working on a new feature to help people set and work towards goals.

The concept goal: Learning Japanese! (I've been 'learning' for 3 years now).

Here's a screenshot: https://i.imgur.com/afAW49V.png.

Basically it's a moodboard that combines a timer, todos, insights, notes, images and links.

'How does that help me learn new things better' you ask?

The challenge with goals - as I see it - is keeping the path from the you of today to the aspirational you (the one that speaks Japanese) clear. January kicks off, a whole bunch of life gets in the way and when you finally get time to focus that path has become a nebulous mess.

The idea with Goals is to be able to open the app and immediately know:

1. What have I achieved so far: insights on hours spent, tasks completed and how close you are to your goal.

2. What do I have to do next: this is your "how". Tasks, links, audio files, notes.

3. Why am I doing this, again? images, media, notes.

The 'Why' doesn't really fit into most methods of learning but I think forgetting this is the biggest point of failure.

I'm going to grab a coffee and get this shipped. I'll post it to Show HN tomorrow and you can see if it's for you.

9
sp3n 10 hours ago 0 replies      
I have been learning to draw for the past 2 months or so. I want to get into digital painting but I am still learning the fundamentals with a pencil + paper.

A few things that have helped me so far:

- Setting aside at least 1 hour a day to draw. This one is the most important.

- Drawing on the Right Side of the Brain by Betty Edwards

- Fun with a pencil by Andrew Loomis

- Ctrl+Paint: http://www.ctrlpaint.com/ especially the Traditional Drawing, Composition, Perspective and Anatomy sections.

- Sycra's YouTube channel: https://www.youtube.com/playlist?list=PL0373FA2B3CD4C899

10
wasyl 2 hours ago 0 replies      
From my experience it's amazing to spend at least 20 minutes every (I mean every) single day on the thing you're learning. Whether it's playing an instrument or learning a language, there are days when you seem to not have time for that - but 20 minutes is easily found, and it really, really, really helps. I think I've seen some research regarding that, but I might be remembering wrong. What I know for sure is that it both helped me learn more efficiently and helped me through weeks of lesser motivation.
11
ObeyTheGuts 1 hour ago 0 replies      
To draw nicely quickly, just focus on fundamentals - perspective, creating a 3d illusion on 2d. Look up the coil technique for drawing on YouTube; it really makes even your stick figures look pro.
12
kayman 10 hours ago 1 reply      
Block out your calendar to learn a specific task.

Don't overthink. Just do it.

And after a few months of trying and reading, the process will get better automatically.

Key is to just start and be regular.

13
smcl 3 hours ago 0 replies      
For your language learning, once you get to the point of "I can write sentences reasonably well and can fill most gaps using a dictionary", I'd recommend keeping a little blog or diary and writing to it every other day. It would be even better if you had a Japanese friend who could do a little spelling/grammar checking for you. I've been doing this for a while over email/text with Czech, but have started putting it on a blog - I'm only three posts in but it's already very satisfying to look back at what I've got. And they can be SUPER mundane and simple too, mine look like they could be written by a child (without the alcohol) - http://czech.mclemon.io
14
simonhughes22 10 hours ago 1 reply      
Spaced repetition is very effective. It does require flash cards, but it optimizes (i.e. minimizes) how often you have to review each one based on how well you are learning it.
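
For anyone curious what that scheduling actually looks like, here is a minimal sketch of the classic SM-2 rule that Anki-style tools are loosely based on; the constants follow the published SM-2 description, and the function name and example numbers are only illustrative:

    # SM-2 update: the better you recall a card (quality 0-5), the longer
    # the next review interval gets.
    def sm2(quality, repetitions, interval, easiness):
        """Return (repetitions, interval_days, easiness) after one review."""
        if quality < 3:                       # forgotten: start the card over
            return 0, 1, easiness
        if repetitions == 0:
            interval = 1
        elif repetitions == 1:
            interval = 6
        else:
            interval = round(interval * easiness)
        easiness += 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)
        return repetitions + 1, interval, max(1.3, easiness)

    # A card recalled well three times in a row: intervals grow 1, 6, 16 days.
    state = (0, 0, 2.5)
    for q in (5, 4, 5):
        state = sm2(q, *state)
        print(state)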
15
wturner 12 hours ago 1 reply      
What works for me is to integrate the thing I want to learn into my lifestyle and do it everyday. If I have a day where I am inspired or particularly creative I just funnel that energy into the topic. Along the way I give myself small project goals that emerge from exploring the activity.

The ideal scenario is to become immersed so the topic becomes "part of you".

I also give myself a two year gestation period of incremental learning to see results and build muscle memory. I did this with programming and once you do it with one topic, you will build the confidence that it will come to fruition with anything else you decide to do.

I'm not a fan of peddling the idea that anything worth learning can be learned "fast". It may work for some people, but I think they are the minority. It's been my experience that there's usually a fair amount of self-deception involved in "fast" learning... or "fast" anything, for that matter. This works for some people - I'm not one of them.

If you want to learn a new language then walk around with headphones listening to people speak it. When you talk to people in English in your head ask yourself how to say the same thing in the alternate language. etc....

16
CuriouslyC 12 hours ago 1 reply      
Check out Dr. Robert Bjork's page on desirable difficulties (https://bjorklab.psych.ucla.edu/research/).

Additionally, you need to write about whatever you're learning. Essays, reviews, whatever strikes your fancy. Learning really happens when you try to use the information.

17
JoshTriplett 12 hours ago 0 replies      
Even if you're a self-taught person, you don't have to learn everything that way; you could (for instance) take a class or otherwise go through a course of study. Not necessarily a university class; check out your local community center, or seek out a local artist who also teaches on the side.
18
medell 11 hours ago 0 replies      
The first time I learned a new language I didn't use these tools and I failed miserably. Looking back, I'm thinking, "Why did I spend 300 hours and $X of cash without taking the time to learn about learning?"

This is especially true with languages, unless you can be fully immersed, but I still recommend it. Look into metacognition and spaced repetition. You'll need a system that works for you but look for techniques backed by research.

Specifically, Fluent Forever is a fantastic book on learning how to learn languages, and the Scott Young blog mentioned is great. I second the art recommendation "Drawing on the Right Side of the Brain". Good luck!

19
colearn 3 hours ago 0 replies      
Find someone to learn with you. It is much easier when you are learning with someone. You are much less likely to lose motivation and can focus. If you cannot find someone among your friends, you can use https://colearn.xyz to find someone to learn with you.
20
xiaoma 12 hours ago 1 reply      
Wow! I feel like we're kindred spirits! Drawing on the Right Side of the Brain is an incredible book that did it for me. I've been writing about language learning all over for years, but this interview with Gabriel Wyner is a great spot to start: http://lingsprout.com/en/experts/gabriel-wyner-making-your-o...

He was in the process of studying Japanese at the time of the interview and had some specific comments about it vs the several languages he's learned before.

21
hoodoof 6 hours ago 0 replies      
Took me many, many years to "learn how to learn".

The key thing to understand is that when learning something new, you will likely be pretty bad at it for a long time, until you have practised and researched it A LOT. Then you will find you have gained some expertise.

When you anticipate this, I think you are much less likely to give up on the basis of "I'm no good at this".

Anyone can become competent at pretty much anything, given effort and practice.

AND IMPORTANTLY, by "practice" I mean doing it for real, not doing training exercises.

22
buzzybee 6 hours ago 0 replies      
Find a cyclical, ritualized behavior that you can engage in as a way to ramp up towards your main task of the day. For example, first you clean your tools, then you do a warmup, and then you are prepared for something high intensity. You can learn a lot of things by engaging persistently in the warmup activity, and as you feel able, you can "break through" to deeper levels. Don't jump around to different intensity levels or try to force calendar time on them; smoothly build up and then break.

This is how I'm trying to make it work, at least.

23
snailletters 6 hours ago 1 reply      
I am currently learning Japanese; I started only a month or so ago. I have found two amazing resources (made by the same people) that I'm sure will be of great help to get you started with Japanese. They have a textbook called TextFugu that not only teaches you Japanese, but walks you through the hard parts of getting started and essentially teaches you to motivate yourself to learn. They also have a spaced repetition online program (similar to Anki) called WaniKani that helps you learn Kanji.
24
cha-cho 5 hours ago 0 replies      
Kathy Sierra gave an informative talk on the topic of learning at the 2015 O'Reilly Fluent conference. It's entitled "Making Badass Developers".

https://www.youtube.com/watch?v=FKTxC9pl-WM

25
imh 9 hours ago 0 replies      
Practice and persistence. It's as simple as that. You won't learn much unless you make time for it. Don't focus too much on learning perfectly the right way, just get started and keep at it.
26
akbar23 9 hours ago 0 replies      
Check out Beyond Brilliance - a book that was just released out of UC Berkeley, designed to teach you how to learn. www.beyondbrilliance.org
27
garysieling 12 hours ago 1 reply      
I find learning more motivating within the context of a project, although that is more relevant to drawing or tech knowledge.

Also, I built a search engine for lectures, which has a lot of talks from tech conferences, which I find helpful for learning about software development topics - https://www.findlectures.com

28
vskarine 12 hours ago 2 replies      
Tim Ferriss wrote a book on how to learn things fast. It's called 4 Hour Chef: https://www.amazon.com/4-Hour-Chef-Cooking-Learning-Anything... He gives a lot of tips but it's up to you to test them out and see what works for you.
29
JesseAldridge 13 hours ago 0 replies      
I just listened to the audiobook version of this: https://www.amazon.com/Make-Stick-Science-Successful-Learnin...

It's pretty good.

The main idea is that learning is supposed to feel hard. That sense of frustration and confusion is what building new neural connections feels like.

30
egypturnash 12 hours ago 0 replies      
With any new skill you need to be willing to sink a lot of time into it. And you need to be fine with being absolutely terrible at it in the beginning.

I usually tell people who want to learn to draw to go to http://johnkcurriculum.blogspot.com/2009/12/preston-blair-le..., get the Preston Blair book, and start doing these exercises by master animator John K (creator of 'Ren & Stimpy'). You will get a lot better, a lot faster. These exercises focus on simple cartoon characters who wear a lot of their construction on the outside; once you can draw cartoon characters, you can keep drawing more of them if that's your thing, or you can build on top of that and start learning anatomy and drawing more complicated characters. (Or do both.)

There's other well-regarded drawing courses on the internet and someday I should probably pick a new one to send noobs to, what with John K kind of being an asshole - but I learnt a hell of a lot when I worked under him, and he is really good at teaching this stuff.

Most of what I know about drawing more complicated figures came from a combination of Bridgeman's "Constructive Anatomy" and Loomis' "Figure Drawing for All It's Worth", and a life drawing teacher who hewed very closely to Glen Vilppu's drawing manual. If you can fit some life drawing classes into your life then TAKE them, you will learn a ton.

Also: Make a space in your life to do this. I ride the bus a lot, and before the advent of smartphones, I'd have little to do to amuse myself besides stare out of the window, read a book, or pull my sketchbook out and draw. Maybe draw some idea floating around my head, maybe draw something I glimpsed out the bus window, maybe something based on my fellow passengers, maybe just some cubes, or the hand I wasn't drawing with. I got a lot of practice in without feeling like I was making myself "practice". Whatever you may be learning, if you regularly drop yourself into a time and place with nothing much to do besides the thing you wanna learn, then you'll do it more often.

Don't blow several hundred bucks on a ton of paints, or on pro software and a Wacom tablet. Just start with a few hardback sketchbooks and some pens and pencils. Oh, and not mechanical pencils. Just grab like a pack of Ticonderoga 2.5Bs, they're cheap and pretty good. And try holding them so that the side of the point addresses the paper for a lot of the beginning of your drawing; this will do several things for you:

* it will train you to keep your wrist fairly steady, and to draw more with your entire arm; keeping your wrist straight and steady will help keep the Carpal Tunnel Fairy away.

* it will make your initial lines light, and prone to fade away as your hand brushes the paper; this keeps you from bearing down to gouge an impossible-to-erase line in the paper, and gives you more room to make mistakes before having a dark, illegible mess of lines you can't draw over.

Don't get lost in trying to save a drawing, either. Paper's cheap, turn the page and try the same subject again, or a new one.

When you make a picture you like, hang it over your drawing board, turn it into your computer's backdrop, and keep trying to draw something better than it. You may find yourself hating it because you start seeing all the mistakes. That's great - go draw something new that doesn't make those! (This may take many tries, some mistakes are harder to stop making than others.)

Don't worry about "your style". If someone points out a mistake in your drawing and you find yourself wanting to say "but that's my style!", then you are just covering up your weaknesses unless you can actually sit down and bust out a version of the drawing that Does It Right. When you can do that you can legitimately say "dis mah style". Steal stylizations from artists you love (you're looking at other people's art, right? A lot?), make your own based on reality.

You will find a lot of people declaring "rules" of drawing. Always do this, never do that. The truth of the matter (IMHO) is that all rules of art are actually just warnings: "never do this" really means "if you do this without thinking about what you're doing it'll probably turn out badly". Know the rules, know which ones you're breaking, and break the fuck out of them while staying well within the boundaries of the other rules you know.

(I spent a decade in the LA animation scene, then burnt out and draw comics now. If you wanna look at my work to decide if I'm someone who you should listen to in this, it's all at http://egypt.urnash.com)

31
du_bing 10 hours ago 0 replies      
I think you should frequently get the feeling that things are done. So decompose the big things you want to do into little, viable things, and do one little thing at a time. That way you can feel successful every time, which is great, and then it's more likely that you can continue.
32
gravypod 11 hours ago 1 reply      
I've not done this, but I intend to. If you want to learn Japanese, I've had many people recommend this to get started: http://store.steampowered.com/app/438270/
33
itamarst 10 hours ago 0 replies      
"Design for how People Learn" is pretty good book on helping people learn.
34
pps 11 hours ago 1 reply      
I was learning with his method; it works perfectly: https://www.scotthyoung.com/blog/myprojects/portrait-challen...
35
rm_-rf_slash 10 hours ago 0 replies      
I learned Chinese at my best pace by watching Chinese TV streams while practicing calligraphy over and over, as well as studying vocabulary and grammar with the streams of Chinese TV in the background.

Chinese TV sucks. I paid no attention to the ctrl+c/ctrl+v plots. However, it did help me learn tonality in a way that the butchered American classmate pronunciations never could. Subtitles also helped with learning characters.

I have found that learning works best with as much immersion as possible. It is never as casual as a subway commute crossword puzzle.

36
dualogy 4 hours ago 0 replies      
> However, I find it pretty difficult to pick up new things.

That's the whole idea of it, sonny.

> Learning a language is also pretty intimidating

If you're like most people, then the very experience of having learned your very first language, your mother tongue, was probably "quite the struggle, that you never experienced as quite-the-struggle, because you had no preconceived notions as to what constitutes quite-the-struggle".

Doubtless everyone who finds it somewhat gratifying keeps at it. The question then is: when is it gratifying and when not? I posit it's gratifying not primarily when you garner praise or grades from others, but simply when you realize you grasped things this week that you didn't, or had no idea of, just the previous week.

Just be a kid, poke holes in everything, bend it, try to break it, combine everything with everything, laugh or marvel at what results.

Now languages and drawing are a bit different. What's the point of "learning to draw" if you can't draw in the most outrageously "you" way? Don't draw "nicely"; that should evolve over time. Draw what comes naturally to you. If only random lines come to you at first, great, that's the first annoyance that'll before long force you to figure out the trick to arriving at slightly-less-random figures. Go wild. Languages are slightly different, as at the end you want to comprehend and be comprehended. Maybe human languages are somewhere in the middle of the spectrum from wild drawing to highly-restricted formal grammars such as programming languages. If you keep tinkering at these extreme ends of the spectrum, as always, things more in the middle might fall into place a little more easily.

Where am I getting with all of this? Learning (anything) from first-principles by falling-down-and-getting-up and trial-and-error and not-constantly-assessing-your-current-proficiency is the long and hard way, but it's the surefire way and the natural way. And for many, certainly in this crowd I'd wager, the most gratifying one.

> Learning to draw especially is pretty overwhelming for me; I have no idea how to start, as someone with no skill or experience in drawing whatsoever.

Well, what would be the point of learning if it wasn't overwhelming, if you already knew where to start and where to go from there, if you already had the skill and experience? I truly do wonder now what your definition of "learning" is ;D

I found that whenever I invested much time in just enjoying, in a deep, "almost professional-fulltime-fan" way, the works of highly skilled creators I respect and admire in a topic (painters/certain comic artists, musicians when it comes to learning instruments or composition, or for languages brilliant authors of awesome works as well as perhaps poets/songsmiths), the repeated, active and prolonged immersion in their work set the stage properly and "pre-seeded my brain" in profound, hard-to-explain-or-analyze ways. This very period of active admiration irresistibly leads parts of the brain on a diversionary trail of "just how did they achieve all this brilliance" that'll keep finding new leads and cues to then prompt you to purposely proceed in earnest.

Quite wordy, huh? I'm sure there are a thousand handy "learn anything you want in 21 days" guides out there as well. Shame I never felt the need to procure one; why, I could be a master painter and a most proficient converser in a whole host of languages by now! Wouldn't that be impressive. But this never seemed like fun. If you wanna learn for fun and with fun, set small goals and even smaller expectations, and allow as much time as possible. Maybe it's just me, but "I'd like to be a great painter (or French speaker) in 21 years" sounds like a much more delightful endeavour than in-21-days (or weeks). Because if that's the outset, chances are that as a byproduct you'll already be "really quite decent, better than you expected" after 21 weeks to months, but more importantly, by that time you'll no longer even worry about this, as immersedly spending much time with X, Y and Z will have become just part of who you are as a rule.

That's probably the wordiest way I've ever said "Just Do It and Keep At It". Well, I've done my silly deed of the day; time to get back to my own hackery now.

37
adamnemecek 12 hours ago 1 reply      
I've observed a couple of things about the way I learn and I think that these are pretty general

a.) Print is a lot better than digital.

b.) You shouldn't read books linearly. I generally jump around a lot and read a particular book several times. The first pass might take just an hour or two; I try to understand the structure of the book and create a scaffolding of sorts, and I might get 15%. During the second pass I might try to get the next 30%: I should have a good idea of the concepts of the book, though I might not be able to solve all the problems. In the next pass, get the next 30%. The fourth pass is optional, if you really need to understand 100%. The best part is that a lot of the time you don't actually have to do all the passes; the first two might be enough.

The one thing I always hated about school is how you are forced to master each chapter 100% before moving forward. Sometimes going forward actually helps you understand previous chapters because it puts them in context.

c.) Highlighting helps me a bunch. Some people have the issue that they end up highlighting too much. I don't highlight when it's all new to me; only after I've finished the chapter do I go back and think about what's important to highlight. It feels like the process of selecting what's important might be more important than the highlighting itself. But when you come back to it later, the highlighting definitely helps. Writing some notes with a pencil in the book is also good.

d.) More important than fully mastering all the material is making sure that you aren't bored or frustrated. If you can't move forward with something, give it some time, come back to it.

e.) Generally if I'm confused, doing a quick review pass from the very beginning of the book tends to clear things up a lot.

f.) Doing a "compare and contrast" between things that seem similar (or even if they don't) is usually a good way of strengthening some connections.

Btw, over the last couple of weeks, I've been trying to learn ML almost full-time. In the process, I think I managed to figure out what the best resources for this are, and I'm in the process of setting up a website discussing what I've found. I started working on this yesterday so it's not quite ready yet. However, if you'd like to check out ML in 2017, I'm hoping to make the process a lot less painful. You can sign up here if you'd like to get notified when it's ready:

https://docs.google.com/forms/d/e/1FAIpQLSfnksZmz7oH9Vpjtxp1...

38
melling 9 hours ago 0 replies      
If you want to learn a language, find a cheap immersion school in a country and spend 3 months there. Guatemala is great for Spanish. I hear Montreal is good for French.

Drawing... well, I'm not having much luck with that. Did 6 months of drawing 1 hour a day last year. Think I got a little better. Reddit has a couple of useful groups to follow:

https://www.reddit.com/r/learntodraw/

https://www.reddit.com/r/ArtFundamentals/

23
$2T in Proceeds of Corruption Removed from China and Taken to US, AUS, CAN, NL antimoneylaunderinglaw.com
126 points by kspaans  7 hours ago   115 comments top 16
1
a_bonobo 7 hours ago 6 replies      
This is a relatively big problem in Australia - there has been talk of a housing bubble in Australia for the last few years which in part has been propped up by Chinese grey money buying whole apartment buildings for investment, which drives up prices (they can just dictate prices) and keeps young locals out [1]. I cannot afford a house and I have a well-paying job.

The government looks the other way since building is one of the last big job creators now that the mining boom is over. For example, there is a government body for foreign investment breaches (there are rules like you're not allowed to buy 'used' property if you don't have permanent residency, though I've personally talked to quite a few Chinese students whose parents did that for them), but it has never once initiated court action [2].

So everybody profits in the short term - Chinese corruption can move 'dead' money out of the country, Australian government gets to present itself as a job creator. Except young Australians, who have to rent and cannot rely on real estate property for their retirement.

In the long term, the bubble is going to pop and then you have dead cities with empty high rises falling apart (but then again, people have been saying for at least 10 years that the bubble is going to pop any day now and it's still inflating)

[1] Contains no numbers: http://www.smh.com.au/comment/grey-money-from-china-helps-bl...

[2] http://www.abc.net.au/news/2014-11-27/foreign-buyer-rule-enf...

2
RandyRanderson 5 hours ago 0 replies      
BC is far and away the most impacted region. Let's list some quick housing price examples:

NL - Amsterdam up 15% in 2016 [-1]

AU - Sydney up 17% over 5 years [0]

NZ - up 13% oct 2015-2016 [1]

BC - Metro Vancouver up 31.4% August 2015-2016 alone. BC (not just Vancouver) housing prices have almost tripled since 2004. [2]

No, these new rules the BC government has put in place are not going to work. I've commented on the reasons before here:

https://news.ycombinator.com/item?id=12873156

https://news.ycombinator.com/item?id=12215490

[-1] http://www.globalpropertyguide.com/Europe/Netherlands/Price-...

[0] https://en.wikipedia.org/wiki/Australian_property_bubble

[1] https://www.bloomberg.com/news/articles/2016-11-02/the-rich-...

[2] http://www.cbc.ca/news/canada/british-columbia/26-slump-in-v...

3
jamhan 7 hours ago 0 replies      
In Australia we were supposed to have tranche 2 ("covering real estate agents, lawyers, accountants, car dealers and others") of the anti-money laundering regulations introduced over 10 years ago. As this article details, it is "comically" overdue: http://www.smh.com.au/business/banking-and-finance/slated-an...

While political parties can wallow in the extra taxes garnered from sky-rocketing real-estate prices and sizable party donations, they have no incentive to introduce this second tranche.

For example, any party in power in New South Wales will keep its budget in the black simply from "stamp-duty" taxes collected on real-estate transactions.

The Foreign Investment Review Board was and is a farce and that's how the government likes it.

4
sudeepj 6 hours ago 1 reply      
In India, "grey income" is called "black money". It is a hot topic in India right now.

Recent activities on this:

Note ban (Nov 2016) => https://en.wikipedia.org/wiki/Indian_black_money#Ban_on_1000...

Using analytics (today's news) => http://economictimes.indiatimes.com/news/economy/policy/i-t-...

5
kspaans 7 hours ago 1 reply      
Interesting point on the Vancouver angle:

 Q: In Canada, where does the money end up and why? A: Vancouver is the preferred destination, by far, because of perceived more relaxed anti-money laundering on-boarding compliance and more importantly, easier access to better schools and lifestyle for children of Chinese foreign nationals.

6
tehlike 5 hours ago 0 replies      
I wonder what ways the US could use to prevent Chinese money from buying real estate in the US?

1. Do what Vancouver did: add a tax on foreign property ownership.

2. Tighten the checks on the origin of the money? How exactly, especially when it's coming as all cash? Maybe force it to go through a bank, where a more thorough check is mandatory per the IRS? But then again, banks are not the most trustworthy in this country.

3. Other ideas?

7
coygui 6 hours ago 1 reply      
I study at UBC in Vancouver and rented a room in a house in the nearby area. I could tell that the Canadian government has started to investigate the corrupted $, because one day a government official came to "my house" and asked for information about the previous landowner. It seemed that he had used the corrupted $ to buy the real estate and make more $.
8
sytelus 2 hours ago 1 reply      
Simple question: when someone invests his corruption money in the US, doesn't it automatically become "public" and hence traceable? For example, if you buy a house for X dollars in the US, then shouldn't the Chinese government be able to easily trace you down? My understanding was that so-called "black" money stays black only as long as you use it in the form of cash, not investments.
9
non_repro_blue 6 hours ago 2 replies      
That's a big number, way too large for anyone to take personally in an emotional sense. But it's definitely big enough for a purge. Jailing, disappearances and more.

This is a pretty large economic tide. Almost twice the size of California's economy, all at once. Ranking it The Fifth Largest Economy, worldwide, if it's worth thinking in those terms.

Given that an approximate number of participating Chinese nationals have been enumerated, if they suddenly disappeared (and ~20,000 is certainly within the realm of possibility), what would a sudden halt of two trillion (legal or not) do to the rest of the world?

10
mathattack 5 hours ago 1 reply      
It's interesting to look at this on a per-capita basis. Is ~$2K per citizen a lot? I wonder what the comparable #s for the US and Russia are.

(Note that with 16,000-18,000 people doing the moving, it's very large per intermediary)

11
pizza 7 hours ago 0 replies      
Wonder how this correlates with the current growth of the US economy, and maybe more relevantly, how the top 1% seems to be increasing its wealth much faster than the rest.
12
toodlebunions 5 hours ago 0 replies      
And it all poured into real estate.
13
TaylorSwift 7 hours ago 8 replies      
What's wrong with keeping the proceeds in China?
14
ForFreedom 3 hours ago 0 replies      
Might also explain a lot of Chinese moving into Australia.
15
peteretep 6 hours ago 0 replies      
Wonder why the British Isles isn't in there; unlikely to be EU stuff as NL is on there.
16
andrewvijay 6 hours ago 0 replies      
I thought around 500 times about buying myself a car for Christmas, and then I see this.
24
The Developer Marketing Guide devmarketingguide.com
276 points by craigkerstiens  19 hours ago   52 comments top 13
1
siddharthdeswal 15 hours ago 1 reply      
Here's how in-depth this link is, presented in (hopefully) easier to understand context for most people here:

"The Web App Development Guide for Marketers"

1) Get a database, because that is where all your data will be saved. "You should have a db if you have nothing else."

- MySQL book link

- Relational databases video

- more intro-to-databases links

2) "Fancy having a frontend?" A frontend is needed because your users will want to use your awesome app

- HTML link

- CSS link

3) Do's and Don'ts

- Have forms and buttons

- Don't make the forms too long. Here's some research showing that long forms lead to fewer people submitting them.

... and so on and so forth.

2
stevoski 17 hours ago 6 replies      
> Email might not be the most attractive means of communication, but newsletters are direct and exclusive to customers, a good starting point. You should have an email newsletter if you have nothing else... This is the one opportunity you have to make something just for them that no one else consumes.

I'd love to see some broad evidence that email newsletters are effective. I've experienced no strong correlation myself between the frequency of sending newsletters and sales.

The strongest argument I've heard for email newsletters is, "it can't hurt, so you should do it just in case it helps." But that ignores opportunity cost.

Can anyone (preferably someone who doesn't work for an email campaign company) give any strong evidence in favour of regularly sending out email newsletters?

3
chasenlehara 17 hours ago 1 reply      
This post doesn't go into detail about how to accomplish a lot of these things effectively, so let's share some of our favorite resources.

I'll start: How We Got 1,000+ Subscribers from a Single Blog Post in 24 Hours https://www.groovehq.com/blog/1000-subscribers

4
minimaxir 16 hours ago 4 replies      
The problem with marketing as a developer is that there is a fine line between helpfully increasing awareness of your product/service and being an asshole about it with modals which interrupt blog post reading and "growth hacking" of unsolicited email/Tweet blasts/etc.

Silicon Valley has been encouraging the latter because it works, to my frustration. One of the reasons people ask for upvotes on Product Hunt is that everyone else is doing it and there are no visible consequences for doing so. (and I've recently found out that people do the same for HN votes on occasion because they assume it's a part of the culture)

5
mariusmg 18 hours ago 1 reply      
Isn't this a bit... "basic"? I mean, sending emails and having a blog is not exactly the pinnacle of marketing (especially for developers).
6
ssharp 9 hours ago 0 replies      
re: retargeting

Before you go off shoving fistfuls of money at display ad retargeting platforms, I'd highly suggest running placebo tests first. The platforms will tell you how you're converting people into paying customers at a $2 rate, but with retargeting, you've previously acquired their attention in some form -- that's how you're able to retarget them in the first place. The problem is that, particularly with display ads and Facebook feed ads, you tend to cookie bomb your audience, and you're not getting a fair assessment of how many people actually stayed engaged with you because of your ads.

In nearly all my experience, the actual value of retargeting is never what the platforms tell you it is in their ridiculously misleading CPA reporting.
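
One way to sanity-check the platform numbers is a holdout (placebo) test: randomly withhold your ads from part of the retargeting audience and compare conversion rates yourself, rather than trusting the platform's attribution. A rough sketch with invented numbers, using a two-proportion z-test and nothing beyond the standard library:

    import math

    def two_proportion_z(conv_a, n_a, conv_b, n_b):
        # z-score for the difference between two conversion rates
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        return (p_a - p_b) / se, p_a - p_b

    # retargeted audience vs. randomly held-out audience (numbers invented)
    z, lift = two_proportion_z(conv_a=410, n_a=20000, conv_b=380, n_b=20000)
    print(f"absolute lift: {lift:.2%}, z = {z:.2f}")  # |z| < 1.96: not significant at 5%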

7
anacleto 15 hours ago 1 reply      
If you're interested in this topic, read everything Patrick McKenzie writes: http://www.kalzumeus.com/.

This guide is not "a bit basic"; it just barely scratches the surface.

8
yenoham 17 hours ago 2 replies      
Has anyone here gathered a bunch of resources similar to this, especially around early-days bootstrap Marketing for SaaS?

We created our product as a group with engineering backgrounds, and now some of us are trying to switch to full-on marketing mode, so I'm trying to grab anything I can get my hands on.

9
nrjdhsbsid 13 hours ago 0 replies      
This is a good 1,000-foot overview, but they leave out some specifics that could burn you badly.

Blog content: they say writing anything is better than nothing and there's some truth to this, but compelling content is far more important. One thing that can happen (seen it) is you hire some content monkeys to make you "content". The result is bland and reads like stale Cheerios. Google will knock your site ranking badly if users don't like to read your articles.

When I started writing content for this client my first article got more organic traffic than their whole site with hundreds of pages of "content". This continued with each article I wrote until 99+% of organic traffic to the site came from around 10 articles I wrote.

Biggest thing with content is it needs to be good quality. Long form articles with pictures work the best. The other extremely important point is to tailor the subjects to your readers.

If you sell can openers write content about cans and cooking with canned stuff. Write about ways to open cans when your opener breaks. Write about what to look for in good can openers. Write about how canned food is made. Become the one and only website about anything can-related. Google questions your readers are likely to type and get links to your site on the top search results for those questions. Even if you have to pay some site owners it will boost your rank tremendously.

Make sure a human visiting your site would think it was well made, and Google will too. Use CDNs to make it load fast. The latest and greatest TLS certs. Fully verified email addresses linked to your domain, with all the bells and whistles. Make your site seem legit enough that users would feel okay using their credit cards there.

Email: make sure the emails you send are things your customers want to get. Know the demographics of your customers and tailor your message carefully. It goes far beyond the subject line: if you annoy your customers, your emails will get binned as "promotions" and nobody will see them.

10
mcjiggerlog 18 hours ago 4 replies      
This is perfect timing for me - I've just finished building my side project (http://www.artpip.com/) and feel like I've made something people would want to use, but was unsure about how best to get the word out. Thank you!
11
kumarski 13 hours ago 0 replies      
1. Get Leads. (get a good trigger signal)

2. Send Emails

3. Build Landing Page or other.

4. Repeat 1-3, and modulate the landing page and email template as you get more customer conversations.

12
Mister_Y 18 hours ago 0 replies      
I love this! I always believed that the best developer is one who has a taste for business, and this is along those lines :D
13
ensiferum 12 hours ago 0 replies      
Get a botnet to upvote your blogs/product releases on HN/Product Hunt/Reddit/etc.
25
NYC's brand new subway is the most expensive in the world vox.com
113 points by jseliger  10 hours ago   59 comments top 11
1
hackuser 7 hours ago 3 replies      
Imagine if someone asked you: Highly complex software project A costs $x/KLOC, highly complex project B costs $10x/KLOC, and therefore isn't project B wasteful? It's impossible to answer based on that information, and you probably know far more about software projects than you do about underground urban transit projects.

Many of the comparisons in the article seem to be similar to that, and with projects that seem far more complex, technically and politically, than what most of us deal with. Other than the fact that all the projects mentioned are called 'trains' or 'subways', and I assume are mostly underground, I don't have enough information to say they are comparable at all.

Sometimes, there is no way for the layperson to analyze the situation on their own.

EDIT: Minor edits

2
JumpCrisscross 10 hours ago 3 replies      
Phase One of the Second Avenue line cost $4.5bn [1]. That's about as much as it cost us to build our "Stegosaurus" subway station downtown [2]. Until elected officials lose elections as a result of cost overruns, it will be prudent for leaders to divert resources to efficiently-voting public unions.

(MTA officials say the Second Avenue Subway cost as much as it did because of Manhattan's "complex underground infrastructure" as well as the fact that the New York City Subway runs all the time [3], the latter not being a requirement of Paris or London's systems.)

[1] http://www.thedailybeast.com/articles/2016/12/31/here-s-why-...

[2] http://therealdeal.com/issues_articles/the-path-to-4-billion...

[3] http://www.amny.com/transit/second-avenue-subway-cost-concer...

3
Sami_Lehtinen 1 hour ago 0 replies      
Is every subway project going to be a failure? Here's an example from Helsinki & Espoo, Finland: https://en.wikipedia.org/wiki/L%C3%A4nsimetro#Cost Constant cost & schedule slips. Nobody knows when it will actually be ready. Here's the official news feed: http://www.lansimetro.fi/en/home/news.html Yet that's still 13 new stations, 21 kilometers, for 1.2 billion (estimated).
4
ng12 9 hours ago 4 replies      
> Berlin's U55 line cost $250 million per kilometer, Paris Metro Line 14 cost $230 million per kilometer, and Copenhagen's Circle Line cost $260 million per kilometer.

Are these useful comparisons? How can you possibly compare a train running the length of Manhattan to anything in Copenhagen? I want to know how much a new train line costs in Tokyo.

5
non_repro_blue 9 hours ago 2 replies      
125th Street really, really needs some cross town service.

Something connecting all the way from Riverside Drive/Henry Hudson Parkway, passing through the 125th Street Metro-North Railroad station. (...and maybe even shuttling to Randall's Island, why not?)

It's faster walking than it is to take the buses that run that route. Cold weather means waiting for the buses sucks, and the only time it's worth a trade is when you're carrying something heavy.

Car service in Harlem is slightly schizophrenic, even with car hailing apps and "boro cabs" (because normal yellow cabs don't operate in Harlem, for reasons I still don't understand...).

6
Ericson2314 6 hours ago 0 replies      
Every infrastructure project is its own shitty snowflake in the US; many arguments within the article stem from this. If we simply committed to more projects, budgeted slightly less on the assumption that kinks get worked out, and made cost overruns highly public, I bet things could work out.

The multitude-of-governments-and-jurisdictions problem is more worrisome, however. That kink won't fix itself.

7
orf 8 hours ago 6 replies      
Here in London we are building Crossrail, which is 118km in length deep under London. It costs around £15bn, so that's about £125 million per km. That includes the 40 planned stations.

London is far, far older than NY; it's also more congested underground and a lot harder to organise logistically.

Blows my mind that the NY subway costs this much to extend.

8
c3534l 4 hours ago 0 replies      
I think it's a mistake to place the blame on weak unions, not only because there's really no evidence presented for it, but also because New York City actually has very strong unions, unlike the rest of the US.
9
saosebastiao 7 hours ago 1 reply      
If we were as efficient as Paris, Seattle's recently passed $53B ST3 package would give us 140 miles of fully underground subway. Instead, we get 62 miles, mostly above ground, and on a 30 year timescale. However, this argument was basically dismissed by transit advocates and anti-transit advocates alike. The cost problem in the US is infuriating because those who care about costs don't care about transit and those who care about transit don't care about costs.
10
Animats 10 hours ago 1 reply      
Look at what the East Side Access is costing. Current estimate above $10bn, completion 2022. That involves building another level of railroad station underneath Grand Central without disrupting operations above.
11
LyalinDotCom 9 hours ago 1 reply      
This article is a lot of hot air and weird comparisons
26
Supervolcano Campi Flegrei Stirs Under Naples Italy nationalgeographic.com
45 points by cadlin  7 hours ago   11 comments top 5
1
camillomiller 1 hour ago 0 replies      
Campi Flegrei doesn't mean "burning fields" in Italian, as the author suggests.

Campi is literally "fields". We have that a lot for city/town/village names (e.g. Campi Bisenzio, close to Florence). When used for the name of a place, I would just suggest maybe 'meadows' as a better translation.

Flegreo (pl. -i), on the other hand, exists as an adjective solely to describe someone or something "from the area west of Naples known as Campi Flegrei".

The etymology is of course related to the volcanic activity and has to do with burning. φλέγω (phlego) in Ancient Greek meant "to burn". In Latin the verb was flagro.

In Italian we still have these words in use:

- "deflagrazione" (it's similar to the English deflagration, but with a broader meaning of "explosion"; it's normally used as a synonym)

- "flagrante". Literally "burning", but the common meaning is "evident" or "in the act", as in "colto in flagrante", "caught in the act", "caught red-handed".

The Greek root, turned into "flog-", is still to be found in some specific terms, especially in medical literature. Flogistico, for example. You have that in English, and it's even more recognizable thanks to the "ph": phlogistic. It means inflammatory, causing a burning sensation.

2
f_allwein 2 hours ago 1 reply      
As Michio Kaku said, "Any advanced civilization must grow in energy consumption faster than the frequency of life-threatening catastrophes (e.g. meteor impacts, ice ages, supernovas, etc.). If they grow any slower, they are doomed to extinction." ( http://mkaku.org/home/articles/the-physics-of-extraterrestri... ).

Any guesses how far we are from being able to control volcanoes so that they would not pose a threat? Too far, I would guess.

3
kfk 27 minutes ago 0 replies      
I visited the volcano a few weeks ago; it doesn't look that dangerous when you are on the crater, but then if you visit Pompei and Ercolano, well, it's scary. The eruption of 79 AD pretty much destroyed the city; fossilized corpses are still visible in the archeological site.

https://en.wikipedia.org/wiki/Eruption_of_Mount_Vesuvius_in_...

Naples is the urban area with the highest density in Europe. An eruption of Vesuvio would likely kill at least 1-2 million people and bring Italy and Europe to total economic collapse.

If we could for once think logically and stop all the culture/tradition crap, we would migrate people out of this area. The Balkans are huge and deserted, with lots of areas with similar climate. Spain comes to mind too. Or even Italy itself.

Obviously, the problem here is always the same. You have lots of people living in the worst possible places, but then if something happens it is the State (hence taxpayers) that has to rebuild houses, pay for the emergency, and so on. This is a huge moral hazard.

4
ChuckMcM 4 hours ago 2 replies      
"There will be another supervolcano explosion," scientist James Quick, a geologist at Southern Methodist University in Texas, said in a statement when that volcano was found.

When this happens it is going to immediately change world climate to something very much colder than it is now. It suggests to me that some sort of preparation for surviving in very different climatic conditions than the one we currently experience would be a good investment in time and resources. That said, I'm not entirely sure how we might plan something like that.

5
kchoudhu 5 hours ago 1 reply      
Do scientists still make predictions in Italy? Getting things wrong there is...dangerous.

http://www.sciencemag.org/news/2014/11/updated-appeals-court...

27
Show HN: Automated blind control via an Amazon Echo Dot and Raspberry Pi jwahawis.github.io
89 points by jwahawis  14 hours ago   26 comments top 9
1
bobf 1 hour ago 0 replies      
It's great to see a solution that works using some basic hardware and existing blinds. Not only does that make it incredibly cheap compared to commercial alternatives, but it also doesn't require implementation at time of construction (or a major renovation).

I recently have been working on the technical aspects of building a new house for my aging parents. We went with Lutron's Serena shades, because we also used Lutron's light controls and they feature both Nest and Alexa integration. Construction hasn't been completed yet, so I can't fully report on the final result. But, my initial tests have gone well and setup was simple. For reference, the per-shade cost is around $600 installed. Basic shades of similar quality would likely be $150-200 each, so it's a fair bit more but not outrageous.

I considered a DIY approach, but then thought about the primary users and realized an off-the-shelf solution would be best. Even in your own home, it's hard to overstate the importance of reliability and ease of use for your spouse, kids, guests, etc.

2
kkielhofner 11 hours ago 4 replies      
I love seeing stuff like this, but I have to wonder - why aren't esp8266/Arduino/etc considered more often for these applications? I understand the Raspberry Pi seems obvious (and often is), but these kinds of solutions seem much less awesome when you realize the latest OpenSSL (or whatever) update applies to the full-blown Linux installation controlling your blinds. Sure, Arduino and others almost certainly have plenty of issues with their software stacks, but the attack surface is substantially smaller, and lower cost and reduced power usage are benefits as well.
3
echelon 13 hours ago 4 replies      
How might someone with only software experience know what motor, encoder, power supply, etc. to buy? Would this be an easy enough project to step into, or should one start with something more basic for a first project?

Btw, awesome work OP!

4
aceperry 7 hours ago 0 replies      
Totally love the cardboard "case" held together with some rubber bands. LOL, this is real hacking. I spent a bit of time learning 3D printing and printed a few cool-looking Raspberry Pi cases, but I'm always looking out for cool case ideas.
5
stevebmark 14 hours ago 1 reply      
I've wanted to do this too, since there's a lot of light pollution in my neighborhood but I also want to be woken up by the sun. Also FYI you should probably make those embedded youtube videos
6
LargeCompanies 7 hours ago 1 reply      
Is there an App Store yet for either Alexa or Google Home?

I just got a Google Home and I want to program/add apps (games) or actions (movie times, order a pizza, call a friend on Skype, call 911, record & play back my sleep-talking recordings, and many other ideas I've thought of).

I can't see why such an App Store isn't available yet for programmers, and for either Google or Amazon to profit from!!! One that is run like Apple's App Store, where things are reviewed and approved. As for me, I think these AI speakers are the next big thing, like the iPhone.

7
TheSpiceIsLife 14 hours ago 1 reply      
Thanks for posting this. I've been thinking about how to go about doing this but didn't really know where to start with finding an appropriate motor and controller.

Will come back to this as a winter project; busy building a coffee trailer this summer: https://goo.gl/photos/9rrRAZy7xSprWVnA6

8
awjr 12 hours ago 2 replies      
I wonder if people are working with Alexa to automate houses for people with disabilities.
9
ThatPlayer 10 hours ago 1 reply      
I've been looking into something similar. Are you not reading the state of the blinds in any way? For example, in the case of a power failure or reset.

It looks like your code just has a default.
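
For what it's worth, one simple way to handle that is to persist the last commanded position to disk on every move and read it back at boot. A minimal sketch; the file path and the motor call are placeholders, not the project's actual code:

    import json, os

    STATE_FILE = "/home/pi/blinds_state.json"   # assumed location

    def load_position(default=0):
        # Restore the last known position after a reboot instead of assuming a default.
        if os.path.exists(STATE_FILE):
            with open(STATE_FILE) as f:
                return json.load(f).get("position", default)
        return default

    def save_position(position):
        tmp = STATE_FILE + ".tmp"
        with open(tmp, "w") as f:
            json.dump({"position": position}, f)
        os.replace(tmp, STATE_FILE)   # atomic rename, so a crash mid-write can't corrupt it

    position = load_position()
    # ... drive the motor to `target` here (placeholder for the actual motor call) ...
    # save_position(target)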

28
Prime Number Spiral numberspiral.com
174 points by micah_chatt  18 hours ago   35 comments top 10
1
Pica_soO 11 hours ago 7 replies      
Fascinating. And dangerous.

If you see a pattern and you search for an explanation for it, you can get wrapped up in the hunt and end up investing a lot of time in a wild goose chase.

Our math profs warned us against doing this, because if you zoom out wide enough, there is a pattern in every noise. As an undergrad, I got obsessed with the idea of creating a meaningful divide-by-zero operation.

The result, if I remember correctly, was a "fractal" cave, interconnected, the walls defined by aggregated infinities reseeded by the "echoes" of all previous caves until the next "digit" of the original seed number is reached. What a useless operation, one might think - but I got obsessed with it, because it generated sequences. 1/0 = |1|, 0/0 = 1|2|3|5

Some of the results started to look like the Fibonacci sequence (it's basically an algorithm mapped to infinity, echoing back and forth along the cave walls, after all) and I lost a semester chasing this numeric daydream. :(

Shame on me; I woke up when my math prof zoomed out over some random pattern, revealing "patterns". The truth is, we humans want to see patterns. Desperately. So desperately it can eat lives.

Still a fascinating read, can fully recommend. But wake up if what you find eats you.

PS: To double my shame, I never did publish this. So if you venture down the rabbit sinkhole, put a warning sign up.

2
cypherpunks01 11 hours ago 2 replies      
Also known as the "Ulam Spiral" for Stanislaw Ulam who discovered it by accident in 1963, supposedly while doodling during a boring presentation.

This page is great, but the Wikipedia page is too, and it provides other related work and coincidences: https://en.wikipedia.org/wiki/Ulam_spiral
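
If you want to see the effect for yourself, the spiral is easy to generate. Here's a small sketch (standard library only, odd sizes, orientation arbitrary) that prints primes as '*' on an Ulam spiral; the diagonal streaks show up even at terminal size:

    # Print an Ulam spiral: primes as '*', composites as '.'.
    def is_prime(n):
        if n < 2:
            return False
        f = 2
        while f * f <= n:
            if n % f == 0:
                return False
            f += 1
        return True

    def ulam_spiral(size=21):                       # size must be odd
        grid, x, y, n = {}, 0, 0, 1
        dirs = [(1, 0), (0, -1), (-1, 0), (0, 1)]   # right, up, left, down
        run, d = 1, 0
        while n <= size * size:
            for _ in range(2):                      # each run length is walked twice
                dx, dy = dirs[d % 4]
                for _ in range(run):
                    if n > size * size:
                        break
                    grid[(x, y)] = n
                    x, y, n = x + dx, y + dy, n + 1
                d += 1
            run += 1
        half = size // 2
        for row in range(-half, half + 1):
            print(''.join('*' if is_prime(grid[(col, row)]) else '.'
                          for col in range(-half, half + 1)))

    ulam_spiral()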

3
therein 13 hours ago 3 replies      
The P+41 spiral is absolutely fascinating. [1] I really want to know why the primes cluster there.

http://www.numberspiral.com/art/14.gif
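
If I'm reading the site's notation right (its P curve is the pronic numbers n^2 + n), the numbers on the P+41 curve are exactly Euler's polynomial n^2 + n + 41, which is prime for every n from 0 to 39 and stays unusually prime-rich long after that. A quick self-contained check of the density (this only illustrates the clustering, it doesn't explain it):

    # Count how often Euler's polynomial n^2 + n + 41 is prime.
    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    for limit in (40, 1000, 10000):
        hits = sum(is_prime(n * n + n + 41) for n in range(limit))
        print(f"n < {limit:>5}: {hits}/{limit} prime ({hits / limit:.0%})")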

4
Applejinx 15 hours ago 0 replies      
Somewhere and someday, there is an AI which is reading this and deciding to let humanity live because we apparently can have some inkling of real beauty :)
5
bsaul 2 hours ago 0 replies      
Since there are a few math experts here: it just occurred to me that number factorization may be similar to compression (saying 8 is 4 times 2 feels a bit like compressing data by composing smaller elements). Are there any theories approaching prime number problems using tools from information theory (Shannon and co.)?
6
christophilus 10 hours ago 1 reply      
I wonder if those lines he mentions are anything like the ones I found in my visualization a while back: http://chrisdavies.github.io/primepattern/

This never led me anywhere, for the record.

7
Random_BSD_Geek 10 hours ago 2 replies      
Looks a little bit like an ASCII-art Death Star. I think I see where a direct hit could set off a chain reaction...
8
TwoBit 8 hours ago 1 reply      
How does this look with numbering in base 8 instead of 10?
9
EGreg 10 hours ago 0 replies      
I remember reading Martin Gardner writing about this.

Has anyone found an explanation since then?

10
raister 15 hours ago 0 replies      
Fascinating.
29
The Mathematical Mesh prismproof.org
53 points by buovjaga  16 hours ago   8 comments top 5
1
witty_username 17 minutes ago 0 replies      
This website should be on HTTPS.
3
simooooo 14 hours ago 2 replies      
`Make computers easier to use by making computers secure`

These things do not seem related at all.

4
monochromatic 8 hours ago 0 replies      
I lost interest in this product before I figured out what it actually does.
5
buovjaga 16 hours ago 1 reply      
This is by Phillip Hallam-Baker.
30
Debriefing Facilitation Guide [pdf] etsy.com
45 points by punnerud  16 hours ago   7 comments top 3
1
gosubpl 2 hours ago 0 replies      
Loved the links to the Sidney Dekker books: http://sidneydekker.com/books/ and Charles Perrow's excellent 'Normal Accidents': http://press.princeton.edu/titles/6596.html You might also want to read Diane Vaughan's 'Challenger Launch Decision': http://press.uchicago.edu/ucp/books/book/chicago/C/bo2278192... and John Gall's 'Systemantics': https://en.wikipedia.org/wiki/Systemantics ('Complicated systems produce unexpected outcomes')
2
eugenekolo2 9 hours ago 2 replies      
Can I get a summary of this or something? I can't make heads or tails of what it's about from the title or intro.
3
chadcmulligan 6 hours ago 1 reply      
I didn't realise the craft business was so high-risk
       cached 2 January 2017 11:02:02 GMT