And his minimalist approach to company building is epic and I am a fan.
But it's sort of disingenuous to play David vs. Goliath here, like his line about launching:
"I figured only a small number of smart people would somehow find us among the masses."
He's just like the rest of us - just build a great product and be lean guys!
But oh wait he launched DistroKid on the back of the HUNDREDS OF THOUSANDS of registered musicians already using his other site - Fandalism - founded years before.(1)
EDIT: why I am a fan of pud: in the replies to this comment he brings a data-gun to a knife fight and sorts out my impression with some facts.
Every one-man success story on HN was largely made possible by an existing audience from another entity that took years to grow.
It's pretty disheartening once you read the article and it boils down to this very requirement: You Need An Audience First.
I'd like more insight and detail into how the lone entrepreneurs of HN were able to grow their audiences.
Because when Step 1 is "have a big enough audience", the rest of the steps are pretty much a no-brainer. It's this step that most of us are struggling with.
I've spent years in seclusion writing a SaaS that ultimately netted less than the annual wage of a North Korean factory manager. It was rough both physically and mentally, but it boiled down to the fact that VC-funded startups could do huge PR, buy ads, write blog posts, increase AdWords bid prices, etc., while I could not.
So yeah, you can succeed with a one-man outfit, but not without an audience. It makes sense why LinkedIn is valuable, and why Pinterest, WhatsApp, Oculus, and Twitch get bought out even without revenue.
Steps to reproduce:
1. Identify a poorly served professional segment where the business processes are not the primary task,
2. Automate their processes with vision and passion and empathy for the customer,
3. Make fans for life. Maybe also profit.
Minimum requirements to execute: at least one seed customer; wifi; coffee; time.
Rock on pud.
I have a feeling that this is the dream of a lot of people on HN: having an easy-to-run business with a couple of people, without crazy competition, no need to go sell it to VCs, and making a good amount of money while running it. Not judging, just observing.
All I was able to find were murky statements about a "dozen major services" and "150+ others". That page links to a "MediaNet Customers" page [0a] that displays 24 logos, and a "MediaNet Content Partners" page [0b] that lists 286 partners. Is that it?
I was particularly interested in Bandcamp, and all I've found is that DistroKid mentions Bandcamp and Bandcamp mentions DistroKid, but no proof they really talk to each other.
https://distrokid.desk.com/customer/portal/articles/1276117-...
[0a] http://www.mndigital.com/about-us/customers.html
[0b] http://www.mndigital.com/about-us/content-partners.html
https://distrokid.desk.com/customer/portal/articles/1601235
https://bandcamp.com/help/selling
Why give away your hand like this?
There are many people who want to know about your tech stack. I think the thinking goes: with one developer, it must be an insanely productive stack.
That's old; do you still use the same setup?
> beating VC-backed startups
> DistroKid intentionally has a small team and no investors. We're here to make the world a better place for musicians, not to make billions from them. We'd make ludicrously more money if we charged what the other distributors do.
It sounds like you're coexisting quite nicely with different objectives.
This is really cool, and love hearing about stuff like this. I love the idea of running a small team and scaling a product which doesn't have a huge overhead to make something which beats out the current market by just doing a few things better. Hats off to you man. Great stuff!
Click-bait title, in my opinion. Just because you don't need VCs to satisfy your customers doesn't mean VCs are bad. They are good when you need them to satisfy your customers. DistroKid's competitors' problem is not that they took VC money; it's that they "hired" bad VCs. Now, I grant you, good VCs for startups, the ones who will understand what's good for your customers and what's not, may be rare... But look at Google and Facebook: they did right by taking VC money, don't you think?
This basically would be the same as all the freemium companies out there that offer everything they do for free for a while till they build their product enough to start charging for it later.
My favorite line is in the original DistroKid TC article where @Pud says the goal wasn't to make money with this but to get more people to the social network.
> Because the alumni network is so large and tightly knit, investors or companies who try to maltreat a YC-funded startup can usually be made to stop. [--> footnote] Except for the record labels, which are effectively a rogue state with nuclear weapons. There is nothing we or anyone else can do to protect you from them, except warn you not to start startups that touch label music. 
Your service does various tasks such as cover song licensing which are record label-facing. What has that been like?
And, congratulations on getting engaged.
Anyone know if there is an equivalent for independent films?
All but one of his companies (the one unsuccessful one) were powered by CFML.
 "52 Ways to Screw an Artist, by Warner Bros. Records" https://news.ycombinator.com/item?id=13648245
"Is Bandcamp the Holy Grail of Online Record Stores?" https://news.ycombinator.com/item?id=12324350
It would be really slick if DistroKid had some kind of conversion process for CDBaby / Tunecore customers.
How do you handle support?
How are you able to find infringing music?
I also think it may be disingenuous to suggest this can or should work for every type of business. I happen to agree that services that simply provide a pipeline, aggregation, or intermediary function are A) not something VCs should usually spend much money on, if any, and B) often the most ripe for disruption. But I disagree that finding scaled efficiency in this manner is possible for all types of businesses.
I do however think it's wonderful you are promoting the more traditional idea of a business with this product at least which is the more canonical bootstrapping or self funding/ side job till you make it/ type thing that doesn't need tons of employees to be a good value for those that are a part of that business.
That's just my opinion, though. I'm glad it's been a success.
Just curious, what % of artists gross over $19.99 a year?
Stories like this call to mind people like Mark Zuckerberg and Instagram's Kevin Systrom and Mike Krieger.
"The blockbuster effect has been even more striking on the digital platforms that were supposed to demonstrate the benefits of the long tail. On iTunes or Amazon, the marginal cost of stocking another item is essentially zero, so supply has grown. But the rewards of this model have become increasingly skewed towards the hits. Anita Elberse, of the Harvard Business School, working with data from Nielsen, notes that in 2007, 91% of the 3.9m different music tracks sold in America notched up fewer than 100 sales, and 24% only one each. Just 36 best-selling tracks accounted for 7% of all sales. By last year the tail had become yet longer but even thinner: of 8.7m different tracks that sold at least one copy, 96% sold fewer than 100 copies and 40% (3.5m songs) were purchased just once. And that does not include the many songs on offer that have never sold a single copy. Spotify said in 2013 that of its 20m-strong song catalogue at the time, 80% had been played; in other words, the remaining 4m songs had generated no interest at all."
If only projects I / we had been working on for 2 years became so successful!
Great product. Great support. Great example.
Disincentives of the startup model at work!
These days it feels like if your startup doesn't have polarized outcomes, you're "doing it wrong".
In any event, I wish you well. Live long and prosper.
Wow. I would like to understand more about this.
Really interested to know about your stack and your deployment methodology. Did you build out async infrastructure because of scale issues, or is it inherently a mental model that's easy to grok?
And for those wondering, this is why Oracle wants billions of dollars from Google for "Java Copyright Infringement" because the only growth market for Oracle right now is their hosted database service, and whoops Google has a better one now.
It will be interesting if Amazon and Microsoft choose to compete with Google on this service. If we get to the point where you have databases, compute, storage, and connectivity services from those three at equal scale, well that would be a lot of choice for the developers!
How:
1) Hardware: gobs and gobs of hardware and SRE experience
"Spanner is not running over the public Internet; in fact, every Spanner packet flows only over Google-controlled routers and links (excluding any edge links to remote clients). Furthermore, each data center typically has at least three independent fibers connecting it to the private global network, thus ensuring path diversity for every pair of data centers. Similarly, there is redundancy of equipment and paths within a datacenter. Thus normally catastrophic events, such as cut fiber lines, do not lead to partitions or to outages."
2) Ninja 2PC
"Spanner uses two-phase commit (2PC) and strict two-phase locking to ensure isolation and strong consistency. 2PC has been called the anti-availability protocol [Hel16] because all members must be up for it to work. Spanner mitigates this by having each member be a Paxos group, thus ensuring each 2PC member is highly available even if some of its Paxos participants are down."
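The quoted mitigation can be made concrete with a little arithmetic. The numbers below are my own illustration, assuming independent replica failures, not figures from the paper:

```python
# Why making each 2PC member a Paxos group helps: a member is up as long
# as a majority of its replicas are up, so member availability jumps,
# and the product over all 2PC participants stays high.
from math import comb

def majority_up(p_up, n):
    """P(a majority of n replicas is up), replicas failing independently."""
    need = n // 2 + 1
    return sum(comb(n, k) * p_up**k * (1 - p_up)**(n - k)
               for k in range(need, n + 1))

p = 0.99        # assumed availability of a single replica
members = 5     # 2PC participants in this example

plain_2pc = p ** members                  # every member is one bare replica
paxos_2pc = majority_up(p, 3) ** members  # every member is a 3-replica Paxos group

print(f"plain 2PC: {plain_2pc:.4f}, 2PC over Paxos groups: {paxos_2pc:.4f}")
```

With these toy numbers the plain protocol is down about 5% of the time, while the Paxos-backed version stays above 99.8% availability.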
Google prefers building advanced systems that let you do things "the old way" but making them horizontally scalable.
Amazon prefers to acknowledge that network partitions exist and try to get you to do things "the new way" that deals with that failure case in the software instead of trying to hide it.
I'm not saying either system is better than the other, but doing it Google's way is certainly easier for Enterprises that want to make the move, and why Amazon is starting to break with tradition and release products that let you do things "the old way" while hiding the details in an abstraction.
I've always said that Google is technically better than AWS, but no one will ever know because they don't have a strong sales team to go and show people.
This release only solidifies that point.
1. Defining high availability in terms of how a system is used: "In turn, the real litmus test is whether or not users (that want their own service to be highly available) write the code to handle outage exceptions: if they haven't written that code, then they are assuming high availability. Based on a large number of internal users of Spanner, we know that they assume Spanner is highly available."
2. Ensuring that people don't become too dependent on high availability: "Starting in 2009, due to excess availability, Chubby's Site Reliability Engineers (SREs) started forcing periodic outages to ensure we continue to understand dependencies and the impact of Chubby failures."
I think 2 is really interesting. Netflix has Chaos Monkey to help address this (https://github.com/Netflix/SimianArmy/wiki/Chaos-Monkey). There's also a book called Foolproof (https://www.theguardian.com/books/2015/oct/12/foolproof-greg...) which talks about how perceived safety can lead to bigger disasters in lots of different areas: finance, driving, natural disasters, etc.
Global Spanner looks like a different beast, though. It looks like Google has configured a database for master-master(-master?) replication, across regions and even continents. They seem to be pulling it off by running only their own fiber, each master being a paxos cluster itself, GPS, atomic clocks and lot of other whiz-bangery.
for anyone interested
When Google announced Spanner back in 2012, I'm sure Amazon and Microsoft started teams to reproduce their own versions.
Spanner is not just software. The private network reduces partitions. GPS and atomic clocks for every machine help synchronize time globally. There won't be a Hadoop equivalent for Spanner, unless it includes the hardware spec.
If he were alive, he could say these computers are Google, Apple, Microsoft, Amazon and Facebook.
With Aurora the basic instance is $48/month and they recommend at least two in separate zones for availability, so it's about $96/month minimum. Storage is $.10/GB and IO is $.20 per million requests. Data transfer starts at $.09/GB and the first GB is free.
Spanner is a minimum of $650/mo (6X the Aurora minimum), storage is $.30/GB (3X), and data transfer starts at $.12/GB (1.3X).
Of course with Aurora you have to pick your instance size and bigger faster instances will cost more. Also there's the matter of multi-region replication, although it appears that aspect of Spanner is not priced out yet. So maybe as you scale the gap narrows, but it's interesting to price out the entry point for startups.
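Using only the figures quoted above, the entry-point comparison works out like this (a quick sketch; real bills depend on instance sizes, IO, and replication, none of which are modeled here):

```python
# Entry-point cost comparison using the numbers from the comment above.
aurora_instance = 48.0             # $/month per basic Aurora instance
aurora_min = 2 * aurora_instance   # two zones recommended -> $96/month
spanner_min = 650.0                # $/month Spanner minimum

compute_ratio = spanner_min / aurora_min  # roughly 6.8x at the entry point
storage_ratio = 0.30 / 0.10               # $/GB: 3x
egress_ratio = 0.12 / 0.09                # $/GB transferred: about 1.3x

print(f"compute {compute_ratio:.1f}x, storage {storage_ratio:.0f}x, "
      f"egress {egress_ratio:.2f}x")
```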
This sounds too good to be true. But it's Google, so maybe not. Time to start reading whitepapers...
"You are charged each hour for the maximum number of nodes that exist during that hour."
We've been educated by Google to consider per-minute, per-instance/node billing normal - and presumably all the arguments about why this is the right, pro-customer way to price GCE apply equally to Cloud Spanner.
The idea is that the A-or-C choice in CAP only applies during network partitions, so it's not sufficient to describe a distributed system as either CP or AP. When the network is fine, the choice is between low latency and consistency.
In the case of Spanner, it chooses consistency over availability during network partitions, and consistency over low latency in the absence of partitions.
How is this possible across data centres? Does it send data everywhere at once?
Seems too good to be true of course but if it works and scales it might be worthwhile just not having to worry about your database scaling? Still I don't believe it ;-)
EDIT: further info...
> Spanner mitigates this by having each member be a Paxos group, thus ensuring each 2PC member is highly available even if some of its Paxos participants are down. Data is divided into groups that form the basic unit of placement and replication.
So it's SQL with Paxos that presumably never gets confused, but during a partition will presumably not be consistent.
This is the best weasel PR language I have seen in a long time.
Note that the sentence does not actually proclaim that they solved (the previously "unsolvable") problem of achieving distributed consensus with unreliable communication while maintaining partition tolerance and availability.
The blog only says they don't "violate" the CAP theorem -- whatever that means. So the statement is technically correct. Still the intention is obviously to mislead the casual reader (why else would you start the sentence with "Remarkably"?).
A litmus test: The same statement is true for MySQL - or _any other_ database in fact:
>> "Remarkably, MySQL achieves this combination of features without violating the CAP theorem"
>> "Remarkably, MySQL is not a perpetuum mobile"
An example: the rows you get back from a query like "select * from T where x=a" can't be part of a RW transaction, I believe because they don't have a timestamp associated with them. So you have to re-read those rows via primary key inside a RW transaction to update them. This can be a surprise if you are coming from a traditional RDBMS background. If you are thinking about porting your app from MySQL/PostgreSQL to Spanner, it will be more than just updating query syntax.
Disclaimer: I used F1 (built on top of Spanner, https://research.google.com/pubs/pub41344.html) a few years ago.
If you use 2 nodes, Cost = (2 * $0.90/node/hour) * 24 hours * 31 days ≈ $1,340/month, not accounting for storage and network charges.
Maybe I'm misunderstanding how the pricing works here. Any clarification would be highly welcomed :)
1) How big can all the colocated data for a single primary key get before they don't fit within a split? Can I implement a GMail-like product where all the data for a single user resides within one split?
2) Is there a way to turn off external consistency and fall back to serializability? In return you get better write latencies. This is similar to what CockroachDB provides?
Now he works for Google as an Engineering Manager.
What is a distributed system that is CA? Can you build a distributed system that will never have a partition?
Have they documented the wire protocol? I couldn't find it.
PostgreSQL? How does this work for people migrating from traditional SQL databases? Typically people use an ORM. How would this fit in with, say, Rails or SQLAlchemy?
Looks like Google forgot to mention one central requirement: latency.
This is a hosted version of Spanner and F1. Since both systems are published, we know a lot about their trade-offs:
Spanner (see the OSDI'12 and TODS'13 papers) evolved from the observation that Megastore's guarantees, though useful, come at a performance penalty that is prohibitive for some applications. Spanner is a multi-version database system that, unlike Megastore (the system behind the Google Cloud Datastore), provides general-purpose transactions. The authors argue: "We believe it is better to have application programmers deal with performance problems due to overuse of transactions as bottlenecks arise, rather than always coding around the lack of transactions." Spanner automatically groups data into partitions (tablets) that are synchronously replicated across sites via Paxos and stored in Colossus, the successor of the Google File System (GFS). Transactions in Spanner are based on two-phase locking (2PL) and two-phase commits (2PC) executed over the leaders for each partition involved in the transaction. In order for transactions to be serialized according to their global commit times, Spanner introduces TrueTime, an API for high-precision timestamps with uncertainty bounds based on atomic clocks and GPS. Each transaction is assigned a commit timestamp from TrueTime and, using the uncertainty bounds, the leader can wait until the transaction is guaranteed to be visible at all sites before releasing locks. This also enables efficient read-only transactions that can read a consistent snapshot for a certain timestamp across all data centers without any locking.
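The commit-wait rule can be sketched in a few lines. This is a toy simulation of the idea, not Google's code; the 7 ms uncertainty bound is an illustrative value of the same order as the paper reports:

```python
# Toy simulation of Spanner's TrueTime commit-wait: the leader picks a
# commit timestamp s = TT.now().latest, then waits until TT.now().earliest
# exceeds s before releasing locks, so s is guaranteed to be in the past
# at every replica once the transaction becomes visible.

class TrueTime:
    """Toy clock with a fixed uncertainty bound epsilon (seconds)."""
    def __init__(self, epsilon):
        self.epsilon = epsilon
        self.t = 0.0  # simulated physical time

    def now(self):
        # An interval [earliest, latest] guaranteed to contain real time.
        return (self.t - self.epsilon, self.t + self.epsilon)

    def advance(self, dt):
        self.t += dt

def commit_wait(tt):
    """Assign a commit timestamp; return it and how long the leader waited."""
    _, latest = tt.now()
    s = latest                    # commit timestamp
    waited = 0.0
    while tt.now()[0] <= s:       # wait until earliest > s
        tt.advance(0.001)
        waited += 0.001
    return s, waited

tt = TrueTime(epsilon=0.007)      # 7 ms uncertainty, for illustration
s, waited = commit_wait(tt)
print(f"commit ts {s:.3f}, waited {waited:.3f}s")
```

The wait comes out to roughly 2 * epsilon, which is why TrueTime's tight uncertainty bounds (hence the atomic clocks and GPS) matter so much for write latency.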
F1 (see the VLDB'13 paper) builds on Spanner to support SQL-based access for Google's advertising business. To this end, F1 introduces a hierarchical schema based on Protobuf, a rich data encoding format similar to Avro and Thrift. To support both OLTP and OLAP queries, it uses Spanner's abstractions to provide consistent indexing. A lazy protocol for schema changes allows non-blocking schema evolution. Besides pessimistic Spanner transactions, F1 supports optimistic transactions. Each row bears a version timestamp that is used at commit time to perform a short-lived pessimistic transaction to validate the transaction's read set. Optimistic transactions in F1 suffer from the abort-rate problem of optimistic concurrency control, as the read phase is latency-bound and the commit requires slow, distributed Spanner transactions, increasing the vulnerability window for potential conflicts.
While Spanner and F1 are highly influential system designs, they do come at a cost Google does not mention in its marketing: high latency. Consistent geo-replication is expensive even for single operations. Both optimistic and pessimistic transactions increase these latencies further.
It will be very interesting to see first benchmarks. My guess is that operation latencies will be in the order of 80-120ms and therefore much slower than what can be achieved on database clusters distributed only over local replicas.
That makes $700 per month. Is this the minimum? Or can we have 0 nodes when the lambda is idle?
I know this is a single system, but I'll still say it. This seems like another step in a scary trend for our internet.
How long until it gets shut down with a month's notice?
This is a bold claim. What do they know about the CAP theorem that I don't?
Separately, (emphasis mine):
> If you have a MySQL or PostgreSQL system that's bursting at the seams, or are struggling with hand-rolled transactions on top of an eventually-consistent database, Cloud Spanner could be the solution you're looking for. Visit the Cloud Spanner page to learn more and get started building applications on our next-generation database service.
From the rest of the article it seems like the wire protocol for accessing it is MySQL. I wonder if they mean to add a PostgreSQL compatibility layer at some point.
In any case this is much better than Amazon's offerings... when they actually ship it. :)
Software is about separating concerns, and decentralizing authority. Responsible engineers shouldn't be using this service.
Upd: Downvoting this warning will only increase that number.
It's somewhat ironic that Brewer, the original author of the CAP theorem, is making this sort of marketing-led bending of the CAP theorem terminology. I think what he really should be saying is something in more nuanced language like this: https://martin.kleppmann.com/2015/05/11/please-stop-calling-...
But perhaps Google's marketing department needed something in the more popular "CP or AP?" terminology. I don't see what would be wrong with "CP with extremely high availability" though.
It's certainly wacky to be claiming that a system is "CA", since as the post admits it's technically false; to me this makes it clear that CP vs. AP (vs. CA now?) does not convey enough information. I'd prefer "a linearizably-consistent data store, with ACID semantics, with a 99.999% uptime SLA". Not as snappy as "CA" (I will never have a career in marketing I suppose), but it makes the technical claims more clear.
The first and the second quote do not mean the same thing, not even close.
"A massive 60% of the plastic waste in the oceans is said to have come from India, according to the Times of India."
The TOI reads: "Banning disposable plastic is a huge step for the capital and the country because India is among the top four biggest plastic polluters in the world, responsible for around 60% of the 8.8 million tons of plastic that is dumped into the world's oceans every year."
As an Indian, I see a lot of journalists stuck in a colonial era. They go out of their way to tarnish and stereotype the great unwashed. They manage to turn even positive news to mock and heckle the less developed world.
But this article has taken it to great heights. The TOI isn't exactly known for journalistic integrity and often conveniently pulls statistics from their backside. But to misquote the devil, this article has certainly hit the lowest level.
The tax could cover the cost to clean up the litter. That would create jobs in three ways: 1) plastic clean-up jobs, 2) businesses and economic activity that desperately need disposable plastic can still possibly survive, and 3) jobs making disposable plastic.
Anyways, it's a lot better than taxing things we all agree we want more of. Like jobs.
Air pollution is huge right now. And sad to say, people pooping in streets and rivers is still a major problem.
To me, plastic remnants are a very minor issue in comparison.
How are they going to enforce a rule regarding plastic bags?
The rich will continue to do whatever they want.
The middle-classes will continue to do whatever they can get away with.
The poor will continue to be shit on and abused.
About 20 years ago they banned smoking in public in Delhi (I was there when they did it).
All that this ordinance did was to give the police yet another angle to harass people. More corruption. More bribes.
"The ban took effect on the first day of 2017."
What are the vendors doing? Is water being sold in glass bottles with a deposit scheme for redemption now?
In Himachal Pradesh, the plastic bag ban resulted in a cottage industry forming where discarded newspapers were folded/glued into shopping bags. I'd like to see this same thing happen in the US. A friend imported a pallet of these bags to Florida, and he was able to sell them to vendors and make a small profit. This tells me they might be viable here commercially.
As they say, reduce > re-use > recycle.
Edit: They should ban the burning of plastic, not plastic itself. And enforce it.
It's absolutely amazing how many more groceries you can put in a reusable bag than in plastic or paper bags. So at the bare minimum it's an optimization (fewer trips from car to kitchen).
When I bring this up with people I get unbelievably false rationalizations like: "I reuse the bags for trash" or "reusable bags take 100x more to make than plastic bags."
Plastic grocery bags have a knack for flying into lakes, rivers and streams. I have saved many turtles and fish along the Charles River that were caught in these bags. I have never seen a kitchen trash bag or a reusable bag in the river.
Many grocery stores even give a discount if you use reusable bags not to mention reusable bags are extremely cheap (I don't think I have ever paid more than $2).
Nothing practically happens because:
a. The police have better things to do than rounding up people and shops carrying plastic bags. They'll more likely take a bribe from the shop to turn a blind eye.
b. There's no really low-cost and convenient alternative in many cases, in spite of a lot of shops in India using recycled newspapers for packaging.
Home delivery, which is widespread among the richer classes, is at least partially helpful since shops bring things in their own bags. However, most of India is not rich.
I am very supportive of these types of programs, even if they are hard to enforce.
They need to have anti-litter regulation, awareness campaigns, and enforcement.
Not so many years ago, paper bags were common and were what I used. Paper is great: it's biodegradable, renewable, and convenient.
There are many other priorities to focus on which can have a far bigger impact.
What about all the plastic containers the food originally came in?
Also, the title says "literally all disposable plastic", but the article says it applies to cups, bags, and cutlery.
"Delhi has banned disposable plastic"
Not all of India. Just Delhi.
If you leak a password to any public location, there is only one reasonable course of action: CHANGE IT!
Don't even bother rewriting the commit. Focus on changing that password right away, and while you're at it, figure out a better way to manage your secrets outside of your source code in the future. Mistakes happen, but they shouldn't be repeated.
"... It's not really removing any password, is it? But hey, why not use the momentum ... wheeeeeeeeeeeeeeeeee!"
There are just so many of those it's crazy:
remove .env YOURFAVORITEAPI_SECRETKEY YOURFAVORITEAPI_PASSWORD
And replace "YOURFAVORITEAPI" with CircleCI, Travis, Mailchimp, Trello, Stripe, etc, etc.
Also, the companies I contacted consider it the customer's fault and basically don't care.
It gets a little scary when it veers from professional security to individual personal privacy https://github.com/search?p=2&q=smtp.gmail.com+pass&ref=sear...
add password / add passwords
* https://github.com/search?utf8=%E2%9C%93&q=add+passwords&typ...
* https://github.com/search?utf8=%E2%9C%93&q=add+password&type...
add secret / add secrets
* https://github.com/search?utf8=%E2%9C%93&q=add+secret&type=C...
* https://github.com/search?utf8=%E2%9C%93&q=add+secrets&type=...
That is, for example, if Gmail can ask "it looks like you forgot the attachment" why can't Git say "this is a public repo and you're about to commit and push passwords. Are you sure?"
It's going to be easier to fix the tool than it is to make humans be perfect.
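A minimal sketch of that warning, as a pre-commit-style scanner. The patterns here are my own illustration, not an existing tool's ruleset; real tools like git-secrets or gitleaks do this far more thoroughly:

```python
# Flag lines in staged content that look like hard-coded credentials,
# so the tool can ask "are you sure?" before the commit goes through.
import re

SUSPICIOUS = [
    # key = "value" style assignments for password/secret/API-key names
    re.compile(r"(?i)(password|passwd|secret|api[_-]?key)\s*[:=]\s*['\"][^'\"]{4,}['\"]"),
    # PEM private key headers
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
]

def find_secrets(text):
    """Return the lines of `text` that match a suspicious pattern."""
    hits = []
    for line in text.splitlines():
        if any(p.search(line) for p in SUSPICIOUS):
            hits.append(line.strip())
    return hits

staged = 'DB_HOST = "localhost"\nDB_PASSWORD = "hunter2x"\n'
print(find_secrets(staged))  # → ['DB_PASSWORD = "hunter2x"']
```

Wired into a pre-commit hook that refuses to proceed on a non-empty result, this is exactly the "fix the tool, not the human" idea.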
On an internal VCS, this would still be a problem, but a bit less visible/exploitable...
The other alternative I can think of is to hide sensitive values in environment variables
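For instance, a sketch of the environment-variable approach (SMTP_PASSWORD is a made-up variable name for illustration):

```python
# Read secrets from the environment instead of committing them to the repo.
import os

def get_secret(name):
    """Fetch a required secret from the environment; fail loudly if unset."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"{name} is not set; refusing to use a hard-coded default")
    return value

# In practice the variable is set outside the program (shell, CI, systemd, ...);
# it is set inline here only so the example is self-contained.
os.environ["SMTP_PASSWORD"] = "example-only"
print(get_secret("SMTP_PASSWORD"))  # → example-only
```

Failing loudly when the variable is missing matters: a silent fallback to a default password is how secrets creep back into source code.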
"Add password" finds 792,000 results, of which at least some (on the first page) are actual passwords.
By just looking quickly, it seems that you can still find many recent live keys...
I get it... you like GitHub but you don't want to pay for private repos. That's when you use GitLab or Bitbucket, and then this problem goes away.
Already had a couple of sassy individuals telling me my honeypot is shit via the tty logging.
Reminds me of the eye opening experience available at https://www.exploit-db.com/google-hacking-database/
Sure, people should clean up their work, but the fact is not everybody does, and that won't change tomorrow. You'll simply hear on the news that some Russian hackers are behind the attack, or another bad excuse.
Here I was trying to search for "remove password" just on repos for nicksagona (just happened to be one of the first users to display when you go to this thread's search).
That comes up with zero results. This leaves me wondering how I would run similar searches on repos that I'm involved with, as a way of auditing to make sure none of them have compromised passwords that would need to be changed.
I would love to hear suggestions on how to do this.
I use either `git-crypt`  or `ansible-vault` .
One can do better by adding random lines/logs in lots of files, sneakily removing the password in one of them, and then giving it an innocuous commit message.
But then it all boils down to your mindset at that particular moment when you are committing.
Github, like SSH, uses an asymmetric authentication scheme. They even publish everyone's public keys. It's much more secure than passwords.
jesus christ, how low can you stoop
This could be a big thing. It's time to write: "How to Write Code Without Exposing Yourself".
Same thing but guaranteed to be up to date and (more) complete.
For example, htmlreference.io's page for <input> doesn't mention the autocomplete attribute. MDN lists all its possible values.
> To save, press <kbd>Ctrl + S</kbd>.
But the spec (both W3C and WHATWG) suggests that individual keys should be nested inside an outer <kbd> tag: "When the kbd element is nested inside another kbd element, it represents an actual key or other single unit of input as appropriate for the input mechanism."
Thus, the example should be:
> To save, press <kbd><kbd>Ctrl</kbd> + <kbd>S</kbd></kbd>.
On the face of it, this seems ridiculous. It's too verbose, the tag name is misleading, and if you actually use the correct markup on GitHub or StackOverflow, it will render incorrectly because both sites assume the standalone <kbd> element represents physical keyboard buttons.
On the other hand, what's the value in semantic markup if we don't adhere to its semantics?
Practically speaking, I would be a happier person today if I hadn't read that part of the spec, and instead persisted on in blissful ignorance of the element's intended semantics. Thanks, specs.
Call me paranoid, but I see this diverging from actual specs, then people googling for "html reference" finding it and thinking it is something official. The result would be another W3Schools disaster.
In my opinion, the official W3 specification pages are not that bad, and alternatively there's the simpler MDN with strong community support (thus lower risk of deprecation).
For example the description for <li> is:
Defines a list item within an ordered list <ol> or unordered list <ul>.
Two small pieces of feedback: HTTPS support would be good. Also, when I scroll the list of elements and then click on one, I'm taken back to the top of the list; I'd like to stay where I am.
1. http://htmlreference.io/element/canvas/
2. https://developer.mozilla.org/en-US/docs/Web/HTML/Element/ca...
I know a lot of people big up MDN vs W3Schools, and all their arguments are basically correct, but I find it (MDN) really ugly and hard to read. I often find myself going to W3Schools just to copy a snippet or get a one-line description of what I need.
MDN often feels more like a technical spec than a guide, which it kind of is, I suppose.
Great work on the layout!
> A p element's end tag may be omitted if the p element is immediately followed by an address, article, aside, blockquote, div, dl, fieldset, footer, form, h1, h2, h3, h4, h5, h6, header, hgroup, hr, main, nav, ol, p, pre, section, table, or ul, element, or if there is no more content in the parent element and the parent element is not an a element.
Links to the relevant parts of the official spec would be nice too, e.g. https://www.w3.org/TR/html5/grouping-content.html#the-p-elem...
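Concretely, the rule quoted above means markup like the following is valid HTML, with each paragraph implicitly closed by what follows it (my example, not from the spec):

```html
<article>
  <p>First paragraph; no end tag required,
  <p>because a following p element closes the previous one.
  <ul><li>and a ul closes the paragraph before it, too</li></ul>
</article>
```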
I find this format really easy to read, even if there are slight inconsistencies and nuances in it.
However, the code examples don't look very nice on my box because the font falls back to Nimbus Mono L. How about linking to an actual web font? (I like Ubuntu Mono, though there are lots of other good programming fonts.)
Also, with respect to code samples, would black text on a white background be a possibility?
Is the design completely custom, or did you use a template or theme? I'm really struggling with the design side of my side projects.
Also, if you don't mind me asking, how difficult was it to get approved for Carbon Ads?
I appreciate that the website works well even with all scripts blocked.
One minor issue: your search bar doesn't look like one :) I didn't notice it until my second visit.
- nice brogramming;
- not up to date;
- does not refer to the spec (sorry, this is important when you are in trouble);
- also, read @shpx's comment, he is right.
Verdict: the more-signal-less-noise policy says don't click the bait.
normally I put a goatse.cx link here or a rickroll
There are quite a few breaking changes but there is a very helpful conversion script here: https://github.com/tensorflow/tensorflow/tree/r1.0/tensorflo....
You can find the breaking changes in the 1.0 release here: https://github.com/tensorflow/tensorflow/releases/tag/v1.0.0
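To give a feel for what the conversion script does, here is a toy sketch of the kind of rename it automates. The mapping below is a small, illustrative subset of the real TF 1.0 renames (e.g. `tf.mul` became `tf.multiply`); the actual script handles far more cases, including argument reorderings:

```python
import re

# Illustrative subset of the TF 1.0 API renames; the real upgrade
# script covers many more functions and also rewrites keyword args.
RENAMES = {
    "mul": "multiply",
    "sub": "subtract",
    "neg": "negative",
    "pack": "stack",
    "unpack": "unstack",
}

# Longest names first so "unpack" is matched before "pack".
_PATTERN = re.compile(
    r"\btf\.(" + "|".join(sorted(RENAMES, key=len, reverse=True)) + r")\b"
)

def upgrade_line(line: str) -> str:
    """Rewrite deprecated tf.* call names in one line of source."""
    return _PATTERN.sub(lambda m: "tf." + RENAMES[m.group(1)], line)

print(upgrade_line("c = tf.mul(a, tf.neg(b))"))
# c = tf.multiply(a, tf.negative(b))
```

Obviously, run the official script rather than anything like this on real code.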
I wish AMD graphics cards were supported fully. I really think AMD should find a way to work with the Tensor Flow team on this...
Uh, nope, that was the speedup on 64 GPUs (or CPU cores, I can't remember), i.e. it scales linearly, something that TF hasn't always been good at vs. other frameworks. I'm amazed a journalist with (I assume) basic technical competence could make this mistake.
You can follow the Summit live here: https://www.youtube.com/watch?v=LqLyrl-agOw
I have a couple of applications in mind, mostly time series predictions. But the machine learning field seems to be vast and I don't know where to start.
"a post-API programming model"
But pressing on, he somehow manages to blame the lack of updates to Android phones on the modularity of the Linux kernel. The joke of course being that Linux is monolithic and Google's new OS is a microkernel, ergo more modular.
The quote is: "...however. I also have to imagine the Android update problem (a symptom of Linux's modularity) will at last be solved by Andromeda".
It's hilarious that he can, defying all sanity, ascribe Android's update issue to an imagined defect in Linux. Android phones don't get updated because, for the manufacturers, ensuring their pile of hacks works with a newer version of Android would represent a non-trivial amount of work for an OEM who already has your money. The only way they can get more of your money is to sell you a new phone, which they hope to do one to two years from now.
In short, offering an update for your current hardware would simultaneously annoy some users who fear change, add little for those who plan to upgrade to a new model anyway, decrease the chance that a minority would upgrade, and cost them money to implement.
It's not merely not a flaw in the underlying Linux kernel; it's not a technical issue at all.
Google has already done 90% of the necessary work by adding Android apps to ChromeOS. Two and a half years ago it created "App Runtime for Chrome", which demonstrated that Android apps could run on Windows and Mac in a limited, buggy way. If Google had put meaningful effort into developing such a strategy, we would by now have a relatively simple way to develop software which runs on 99% of laptops and 85% of smartphones and tablets. Developers would now be targeting 'Android first' instead of 'web app first, then iOS, then maybe Android'.
Sundar, if you're reading this - do it!
Unfortunately, the hard part of an operating system isn't in a cool API and a rendering demo. It's in integrating the fickle whims of myriad hardware devices with amazingly high expectations of reliability and performance consistency under diverse workloads. People don't like dropped frames when they plug in USB :) Writing device drivers for demanding hardware is much harder than saving registers and switching process context. The Linux kernel has an incredible agglomeration of years of effort and experience behind it - and the social ability to scale to support diverse contributors with different agendas.
Microsoft, with its dominant position on the desktop, famously changed the 'preferred' APIs for UI development on a regular cadence. Only Microsoft applications kept up and looked up to date. Now Google has such a commanding share of the phone market - Android is over 80% and growing http://www.idc.com/promo/smartphone-market-share/os - they have a huge temptation to follow suit. Each time that Microsoft introduced a new technology (e.g. https://en.wikipedia.org/wiki/Windows_Presentation_Foundatio... WPF) they had to walk a fine line between making it simple and making sure that it would be hard for competitors to produce emulation layers for. Otherwise, you could run those apps on your Mac :)
There are many things to improve (and simplify) in the Android APIs. It would be delightful to add first class support for C++ and Python, etc. A project this large will be a monster to ship so hopefully we'll soon (a few years) see the main bits integrated into more mainstream platforms like Android/Linux - hopefully without too much ecosystem churn
I don't understand half the decisions outlined in the article.
> I also have to imagine the Android update problem (a symptom of Linux's modularity)
I seriously doubt the Linux kernel is anything but a minor contributor to Android's update problem. Handset developers make their money by selling physical phones. In two years, your average consumer probably doesn't care if their device is still receiving software updates. They'll jump onto a new phone plan, with a fresh, cool new mobile, with a better screen, newer software (features!), and a refreshed battery.
Maintaining existing software for customers costs handset manufacturers $$$, and disincentivizes consumers from purchasing new phones (their cash cow). The money is probably better spent (from their POV) on new features and a marketing budget.
Also, considering the way that the ARC runtime for Chromebooks was a failure and had to be replaced by a system that apparently essentially runs Android in a container, will it really be possible for a completely different OS to provide reasonable backward compatibility?
I've been running Android since the Nexus One so I'm no newbie to the platform, but the ease with which iOS manages to get all UI interactions at ~unnoticable FPS and outstanding battery life is staggering when you're used to Android. It feels like some really fundamental choices were made badly on the platform that make it incredibly inconsistent and unreliable. A fresh start would be fantastic.
I get greater than 60 fps with my existing Vive three.js WebVR-ish electron/chromium linux stack. Even on an old laptop with integrated graphics (for very simple scenes). Recent chromium claims 90 fps WebVR, and I've no reason to doubt it. So 60 fps "up to 120fps" seems completely plausible, even on mobile.
The status quo right now among Android hardware vendors is to violate the GPL, and they have faced few if any repercussions for doing so. I wonder if Fuchsia is sort of viewed as the way forward to addressing that.
Anyone care to speculate why there isn't a community version of chromium os? I'd donate to it for sure. It sounds like getting android apps working on it would be pretty easy: https://groups.google.com/a/chromium.org/forum/?hl=en#!topic...
For anyone interested, I intend to write quite often about consumer technology on this blog. Topics will include hardware, software, design, and more. You can follow via RSS or Twitter, and possibly through other platforms soon. Sorry for the self promotion!
Thanks for reading. Please do send any corrections or explanations.
I currently "add to home screen" for most things. I edit my images online, and develop code using cloud9 ide, etc. There are few things I need apps/programs for right now, and that's improving day by day.
iPhone is dropping heavily in worldwide market share, but they still have a lot of the wealthy users. There is a non-zero chance they get niched out of prominence by Android (aka every other manufacturer in the world), at which point network effects start encouraging Android-first or Android-only development. There might be a point where Apple needs to double down on the web, and/or maybe kill off apps, like they did Flash, to still have the latest "apps".
It's really hard to tell if this is actually something that will ship, or yet another Google boondoggle to be swiftly discarded (like the first attempt at ChromeOS for tablets). Google under Larry Page built and discarded a lot of stuff; I wonder if it's the same under Sundar Pichai.
My favourite IDE to use today is IntelliJ, and I prefer it over my experience with Visual Studio (though to be fair, I did not use VS intensively in the past 3-4 years).
I don't experience IntelliJ as "slow". It launches faster than VS did when I used it, and once it is running I keep it open pretty much the entire work-week without any issues.
Neither Android nor Windows nor Chrome OS nor your favorite Linux distro has ever been able to truly compete with the NeXT legacy as it lives on in Apple.
Google is smart enough as a whole to see this, and so it's not surprising that they're attempting to shore up their platform's competence in this particular area. What IS surprising is that it has taken them this long.
Perhaps what's truly surprising is just how much mileage Apple has gotten out of NeXT. It's astounding, and I know Apple realizes this, but I question whether or not they know how to take the next step, whatever that may be. And if Google manages to finally catch up...
Now instead of improving the linux stack and the gnu stack (the kernel, wayland, the buses, the drivers), they rewrite everything.
They put millions into this. Imagine what could have been done with it on existing software.
They say they are good citizen in the FOSS world, but eventually they just use the label to promote their product. They don't want free software, they want their software, that they control, and let you freely work on it.
How's that going to work? iOS, specifically? Is Dart a supported language?
A bit weird to use the past tense here since it's not reached 1.0 yet. You can try it out today (tech preview) to create apps in Dart that run on Android and iOS:
(Googler, not on the Flutter team itself, but working on related developer tools.)
A company the size of Google, with all its internal politics, doesn't work like a startup. Starting a third operating system project and hoping it will replace two major ones means convincing people inside the company to lose part of their influence. Now, it might happen if Chrome or Android were failing, but they're clearly not.
I use Andromeda equivalently with Fuchsia in this article. Andromeda could refer to combining Android and Chrome OS in general, but it's all the same OS for phones, laptops, etc. - Fuchsia. I make no claims about what the final marketing names will be. Andromeda could simply be the first version of Fuchsia, and conveniently starts with "A." Google could also market the PC OS with a different name than for the mobile OS, or any number of alternatives. I have no idea. We'll see.
Won't Dart's single-threaded nature make it hard to take advantage of multi-core processors? Or are they embracing web workers?
This is worrying for Apple. I can see the following playing out:
- Apple continues releasing machines like the TB MBP, much to exasperated developer dismay.
- Other x86 laptop industrial design and build quality continue to improve.
- Fuchsia/Andromeda itself becomes a compelling development environment
- Developers begin switching away from Mac OS to Fuchsia, Linux and Windows
- Google delivers on the promise of a WORA runtime and the biggest objective reason not to abandon Mac OS, i.e. writing apps for iOS, disappears.
- Apps start to look the same on iOS and Android. iOS becomes less compelling.
- iOS devices sales begin to hurt.
Granted, App Store submission requires Mac OS (Application Loader), and the license agreement requires you to use only Apple software to submit apps to the App Store and not write your own, but it seems flimsy to rely on that.
It would really surprise me if Apple got there first. Tim lacks vision and will keep on milking iOS even if the iPad Pro is a failure as a laptop replacement.
Windows is still king in the desktop space, at least as far as user base goes, but it's terrible on tablets and phones. MS has all the tech in place with UWP, but it's still pretty far behind in the race in terms of simplicity and usability.
Chrome OS ticks all the right boxes and is experiencing huge growth, but it's not universal. If Andromeda is real and able to become a universal OS that merges Chrome OS and Android, it might be the best thing since sliced bread.
Also, it would be mind-boggling if they didn't actually fix the update problem this time, and if it wasn't a top 3 priority for the new OS.
"If a programmer writes software for a living, he should better be specialized in one or two problem domains outside of software if he does not want his job taken by domain experts who learn programming in their spare time."
Seems a bizarre sentiment, but after reading this sentence I feel like I really want to donate some money to the guy. If he gives a way, I surely will.
I would also like to recommend another free resource that might be a good complement (theory vs. implementation) to this:
"Operating Systems: Three Easy Pieces"
available online at:
From the title, I had mistakenly assumed it was about the first OS ever.
Surely: Field Programmable Gate Array (FPGA) ?
Sometimes I have to paste a line or two of code, or a few lines of a stack trace. Sometimes I have to paste a string which contains some particular set of characters. Microsoft Lync absolutely destroys the pasted text. It subtly converts the double quotes into some unicode nonsense. Then it converts some common character sets into smilies. When you copy text from Lync it is almost always guaranteed to be different from what what entered originally. God, I hate Lync with a passion.
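The usual workaround is to normalize the mangled text back to ASCII after copying it out. A quick sketch in Python (my own snippet; real clients mangle more than just these characters):

```python
# Undo the kind of "smart" substitutions chat clients apply to pasted
# text. Illustrative only: emoticon conversion and the like need more work.
REPLACEMENTS = {
    "\u201c": '"', "\u201d": '"',   # curly double quotes
    "\u2018": "'", "\u2019": "'",   # curly single quotes
    "\u2013": "-",                  # en dash
}

def unsmarten(text: str) -> str:
    """Map typographic characters back to their plain ASCII forms."""
    for fancy, plain in REPLACEMENTS.items():
        text = text.replace(fancy, plain)
    return text

print(unsmarten("\u201cfoo == \u2018bar\u2019\u201d"))
# "foo == 'bar'"
```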
With a smartphone that would work pretty well via a push notification or an actual call, but I'm not sure how that would work when you want to join a meeting from a physical meeting room with its own AV system. I'm sure there is a way to get that set up.
Minimally it is centralized, and you can't verify that there's no backdoor. In this day and age, that means we're both trusting their core intentions, and also trusting that some government won't step in and silently force their hand. I don't personally feel that is good enough to be considered secure anymore.
This wording has always struck me as being awful. People felt confident investing with Bernie Madoff as well. I'd rather have confidence from proven security instead of just feeling confident.
Edit: there's a problem here. Skype for Business allows up to 250 participants. The AT&T solution (WebEx, maybe) allows, I think, an unlimited number. Amazon Chime has a limit of 250 people. This wouldn't cut it for presentations in large companies, e.g. announcement of annual results or a divisional virtual 'town hall'.
> No Linux support
One problem I have with all video conferencing solutions we've tried (same for my colleagues, all Mac or Linux users, sadly no Windows users to compare) is high CPU usage. I have a 2015 MacBook Pro and when I share my screen CPU usage skyrockets to 150-200% basically pegging the whole CPU. Without sharing my screen CPU usage is at 80-100%.
I have similar problems with certain videos on the web (e.g. Ted.com and others).
Is this something everyone else here sees as well? I always assumed they must because we see it across devices and products.
The basic and plus pricing options, while cheap, are practically useless with a maximum of only 2 attendees, and the $15/user/month pro plan is hardly "a third of the cost...".
Looks like a great product with an average price point.
In case you hadn't seen it, this is basically the anti-marketing video for how conference calls actually work: https://www.youtube.com/watch?v=DYu_bGbZiiQ
- Amazon, PRISM partner
Entry level needs video, since you can get it for free elsewhere (e.g. Hangouts).
There's a long, long way to go for this thing to compete with Hangouts, Zoom, or anything else out there.
Source: I've just tried it out, chatting with myself on native app + 2 browsers.
Why not Linux?
Both Hangouts and Zoom can do Linux, but they aren't seen as corporate as WebEx.
Why. Just... why? Why is this necessary?
"Should we call, or Go to Meeting, or Google Hangout, or Skype, or Lifesize, or Slack, or Adobe Connect, or Zoom, or WebEx, or Chime, or..." It's getting ridiculous.
For new services: Please don't be based in the US or willing to cooperate with the US Government. Remember, "We don't snitch!" is an excellent marketing line -- I'll give you money for that. I don't trust Amazon or any of these at present.
AWS as a more consumer-facing platform probably has a long climb ahead of it but it could be quite helpful for Amazon to differentiate from their many product misses released under the Amazon name.
I know everyone says hangouts is dead and Google isn't putting much work into it. But it does work. And unless they actually shut it down, it gives us what we need. Free. We don't use it for large webinars or anything, and it has its flaws, but... free. That is a really hard point to beat.
Skype seems to not be up to the task. Our Gbit Connection is.
Eg if you want a vendor to join your team's chat or you use it to talk to clients.
Join.me: 50 meeting participants, $22/mo.
That's a personal concern I have with slack.
(has no mention of collaborative white-boarding)
Hard to fathom the number of people out there in the world who are thankful and owe their lives to the foundation choosing to help their community or their cause.
With that said, and not to diminish the good work, it is sad and eye-opening that more progress has not been made on these issues.
I think when you really look at the numbers, besides seeing the positive trends one can also see how truly difficult, large and complex these problems are. The foundation has $50 billion! And Bill can get pretty much any world leader or other billionaire CEO to take his phone calls. Yet, sorry, that is not enough, not even close. The foundation has to focus on very specific issues and even then it hasn't "gotten to zero" where it wants to (though polio is close, down to 37 cases). Private foundations can only do so much; the scale of these problems really requires the cooperation of governments. I'm not sure what can be done on that or what that means, just a bit breathtaking how governments can help people or really screw things up on a scale nothing else even comes close to.
You can find a local NGO or charity that promotes, I dunno, literacy or first-generation college students or refugee settlement, or something. What matters is that they do a good job and you're interested in their mission. You can commit to supporting them with a regular cash gift. That kind of regular gift lets a charity plan their work. (One-off gifts are good too, but they don't have the planning benefit.)
But be careful: they may want you on their board of directors. :-)
It was really uplifting to read this post.
Because of it, the world now looks a little brighter to me.
We're all surrounded by alarmist news every day and often lose sight of the big picture.
PLEASE write a post like this one at least once a year :-)
Whenever someone like that shows up, perhaps we can refer them to this letter.
However, is it fair to credit philanthropy as the sole cause of quality of life improvements across the most destitute populations?
It's a story about the stunning gains the poorest people in the world have made over the last 25 years. This incredible progress has been made possible not only by the generosity of Warren and other philanthropists, the charitable giving of individuals across the world, and the efforts of the poor on their own behalf, but also by the huge contributions made by donor nations, which account for the vast majority of global health and development funding.
If you want to posit that their efforts have been invaluable in improving access to medical services across the globe, you'll get no argument from me. However, that's a small part of what makes up "stunning gains."
It's incredibly heartwarming how many people have been saved by the Gates Foundation. Although the realist in me worries that, in the long run, civil liberties and human rights are also important to humanity in saving human lives, and human dignity.
Without going into an argument about the relative importance, shouldn't at least some significant portion be distributed towards solving the human and civil rights, liberties, and justice abuses here in America?
After all, the reason Bill Gates and Buffett are in a position to be able to help so many people is due to the world they grew up in. The opportunities they were given. A large part of that is individual human liberty. Had Gates lived in another time, and been arbitrarily detained during one of his two arrests for refusing to unlock his electronic devices, would he have encountered a legal battle that derailed his entrepreneurship? If he was a minority, would the police have treated him the same way? If he was abused by the police the way Ian Murdock allegedly was, what would have happened to Gates? Would we have lost the opportunity to save 122 million lives to date through the Gates Foundation?
When I see stats like $1 of vaccines releasing $44 of economic value, I think: there is a market opportunity here.
The question I keep asking is: what would a hedge fund look like that was making a bet that most people in poverty could produce the $44 if we invested the $1? What kind of corporate structure would be needed to distribute the resource AND reap part of the economic value?
Similar hedges can be made everywhere. If I believe black people are worth more than their social status (and commensurate credit access) would suggest.... how do I make that bet with cash?
You can answer "microfinance" but that's just phrasing the question in a different way. The real issue is: how, exactly?
It's not an easy question to answer, but I think unlocking it is a trillion dollar opportunity. You are essentially betting against the entire class of employers. Seems stupid, but so did Michael Burry's bet against the entire class of mortgage lenders.
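As a toy illustration of the bet being described, a back-of-envelope in Python. The $1 and $44 are the letter's numbers; the capture fraction is a made-up assumption, since "how, exactly" to capture any of it is the whole open question:

```python
def captured_roi(cost, value_created, capture_fraction):
    """ROI if an investor funds `cost` and captures a fraction of the
    downstream economic value created."""
    captured = value_created * capture_fraction
    return (captured - cost) / cost

# Break-even: the investor must capture at least cost/value of the
# $44 released per $1 invested, i.e. about 2.3%.
break_even = 1.00 / 44.00
print(f"break-even capture fraction: {break_even:.1%}")

# Hypothetical: capturing 5% of the $44 yields 120% ROI per dollar.
print(f"ROI at 5% capture: {captured_roi(1.00, 44.00, 0.05):.0%}")
```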
The page doesn't even display if you block some of these scripts. This is terrible.
I hope one day all governments across the world operate with this principle.
Gates foundation is doing some amazing work. Looking forward to their contribution in the coming years.
On the opposite side of the numbers, I hope they can help find ways to reduce population growth. It is unsustainable, especially in Africa and Arab countries. (Pity one cannot just implement a China-like one-child policy :))
My background is chemistry/chemical engineering. I had applied for a data scientist position. Phone interview included a problem where I was asked about my solution's complexity. I admitted I didn't know about it.
Still got called back for an interview on site, but the weekend before I powered through this course. Unsurprisingly, it came up in the on-site and they were really pleased I had learnt about it. I got the job.
I also found it useful to implement all the algorithms in Python.
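As an illustration of the sort of thing worth implementing yourself (my example, not from the course), binary search in Python:

```python
def binary_search(xs, target):
    """Return the index of target in the sorted list xs, or -1 if absent.

    Runs in O(log n) time: each comparison halves the remaining range.
    """
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        elif xs[mid] < target:
            lo = mid + 1        # target lies in the upper half
        else:
            hi = mid - 1        # target lies in the lower half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 7))  # 3
```

Writing it out by hand, off-by-one errors and all, is what makes the complexity argument stick.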
It also helps that Robert Sedgewick has been in compsci forever (got his PhD in 1975) and is one of the subject matter experts in algorithms.
Algorithms unlocked: https://www.amazon.com/Algorithms-Unlocked-Press-Thomas-Corm...
Both include the same author as the one in this article (Thomas Cormen).
It's nearly a third of the length of CLRS, and half of Sedgewick. Much more precise, yet it offers more in that it talks about common problem-solving use cases with data structures and algorithms, rather than going through the theoretical proofs behind them.
And I'm kind of smirking right now, because once again asymptotic notation got butchered.
I've spent the good part of this semester trying to get my head around a very formal, very dense script for my own algorithms course. And I finally cracked asymptotic notation. Maybe I'm just dense. But if that's the case, I'm sharing a classroom with others who are equally dense.
We dealt with all 5 classes: big O, little o, big theta, big omega, and little omega. We're required to always give the "most exact" classification for best/avg/worst, including "does not get as fast as" or "does not get as slow as".
I'm willing to write a "freshman friendly" write up if someone's willing to post it or use it. I'm shit at self publishing.
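For what it's worth, one freshman-friendly way to keep the five classes straight (assuming the limit of f(n)/g(n) exists or is infinite, which covers the common cases):

```latex
f \in o(g)      \iff \lim_{n\to\infty} \tfrac{f(n)}{g(n)} = 0
  \quad \text{(strictly slower than } g\text{)} \\
f \in O(g)      \iff \lim_{n\to\infty} \tfrac{f(n)}{g(n)} < \infty \\
f \in \Theta(g) \iff 0 < \lim_{n\to\infty} \tfrac{f(n)}{g(n)} < \infty \\
f \in \Omega(g) \iff \lim_{n\to\infty} \tfrac{f(n)}{g(n)} > 0 \\
f \in \omega(g) \iff \lim_{n\to\infty} \tfrac{f(n)}{g(n)} = \infty
  \quad \text{(strictly faster than } g\text{)}
```

The general definitions use limsup/liminf, but the limit version is enough for most homework.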
Pair this up with this excellent lecture by the authors Sedgewick and Wayne: https://www.coursera.org/learn/algorithms-part1/home/welcome
Which is great, except it takes you to the main list of subject matters, and algorithms isn't in there.
So I'm not able to view this link on my iPad unless I uninstall KA?
One of the authors - Professor Balkcom is our advisor as well.
The follow-up to that is understanding what you're counting and why, i.e. branches vs. statements vs. dereferences vs. logical I/Os vs. physical I/Os ...
Not just the everyday examples of what constitutes an algorithm, but the voice, presentation, etc.
I'm making an explicit claim that Python is no better than any other language for implementing algorithms. HN, please prove me wrong in an objective way so we may all learn.
Linus Torvalds: Well, so this is kind of a cliché in technology, the whole Tesla versus Edison, where Tesla is seen as the visionary scientist and crazy idea man. And people love Tesla. I mean, there are people who name their companies after him.
The other person there is Edison, who is actually often vilified for being kind of pedestrian. I mean, his most famous quote is, "Genius is one percent inspiration and 99 percent perspiration." And I'm in the Edison camp, even if people don't always like him. Because if you actually compare the two, Tesla has kind of this mind grab these days, but who actually changed the world? Edison may not have been a nice person; he did a lot of things, he was maybe not so intellectual, not so visionary. But I think I'm more of an Edison than a Tesla.
The Tesla vs. Edison narrative is always couched in terms of the idea guy vs. the more pragmatic (perhaps more business-oriented) guy, not unlike the popular Woz vs. Jobs narrative, or the Jobs vs. Gates narrative in the '80s. These are popular narratives and archetypes that reflect the people involved, but can lead people to mythologizing history rather than understanding it.
In a different field, Lennon vs. McCartney.
So, among my peers at least, conventional wisdom is that Tesla was 100% an amazing visionary and got screwed over by unfair forces of history, and Edison was the villain whose contributions are overrated by historians. There's some truth there, but more than anything else it's a historical narrative where people are slotting Tesla and Edison into archetypes.
When a lot of people talk about Tesla vs. Edison, they're really just talking about those archetypes, and revealing to what degree they value inspiration vs. perspiration. I think that's all Linus is doing here, saying that in his mind perspiration is undervalued and inspiration is overvalued among his peers. I don't think he's really trying to make a historical argument, which is what a lot of the commenters here are assuming.
Apple's slogan may have been "think different", and they have the image of being radical innovators, but hardly any of their innovations actually originated with them. Apple is 99% perspiration and 1% stealing good ideas :-)
There's even an infographic: http://mashable.com/2012/10/27/apple-stolen-ideas/#Fs4Q5gSS....
"It's a social project," said Torvalds. "It's about technology and the technology is what makes people able to agree on issues, because ... there's usually a fairly clear right and wrong."
EDIT: Just for context, HN thankfully edited the title. When I wrote this the post was using the article's title: "Talk of tech innovation is bullsh*t. Shut up and get the work done says Linus Torvalds"
- Talk of AI is bullshit. Shut up and get the work done.
- Talk of Machine Learning is bullshit. Shut up and get the work done.
- Talk of VR is bullshit. Shut up and get the work done.
- Talk of Smart Contracts is bullshit. Shut up and get the work done.
- Talk of IoT is bullshit. Shut up and get the work done.
Not sure if I entirely agree with him but there's some truth.
P.S. As a side-snark... enough already about all these various dev technologies. So they enable still-shitty user experiences? So what. No one says, "Oh, I love that they use _____."
Users. Don't. Care.
So please, for the love of God & country, stop stroking yourself with your shiny new (dev technology) object. No one cares. The technology is a means. The experience is the ends. Stop focusing on the wrong problem. Please?
Edit: I'm not saying that he WANTS to be a multi-billionaire, but the fact is that he has captured disproportionately little of the value he's created. By rights he should be one of the wealthiest people in tech. He might have $150m, but that's peanuts given what he's done. The wealth of the guy who made Instagram dwarfs that. The guy who made WhatsApp has a net worth of $8b.
Creation might be 90% perspiration, as he says, but perspiration doesn't equal success, and success doesn't equal a career. Obviously everything isn't about money, and Torvalds's legacy will be timeless. But if you want to ensure earnings, at some point it's a good idea to sell.
Certainly true. IMHO, innovation is about orientation, while perspiration is about walking. They live on different timescales: GTD takes time while innovation is a spark. However, both are equally important: it would be useless to go forward in a wrong direction, useless to identify a meaningful direction without going forward, and of course useless to walk backward.
An acceptable - and subjective ! - balance is hard to find, these days.
What works for me is a series of plan -> do -> review sequences with about 10 to 15% planning, 80 to 85% doing and 5 to 10% reviewing.
That's where the real work is, in the details. I respect those who walk the talk and he's one among them.
It's not so original; it sounds like it was copied/pasted from a quote from 100 years ago.
Also closes with a great quote. Code is easy; it's either right or it's wrong. People are the sticky wicket.
> "It's almost boring how well our process works," Torvalds said. "All the really stressful times for me have been about process. They haven't been about code. When code doesn't work, that can actually be exciting ... Process problems are a pain in the ass. You never, ever want to have process problems ... That's when people start getting really angry at each other."
Now ask someone in the VR/AR department. Every day they have to think up new 'innovative' ideas because they are on the bleeding edge. We know innovation is needed because so far not everything is working.
What about Neural Nets, where there have been a lot of innovations to get from a single 'neuron' to what we now call deep learning. And the list goes on.
"Opportunity is missed by most people because it is dressed in overalls and looks like work."
+1 for that take on innovation.
In the US there is intense pressure right from school to colleges to work to be 'exceptional', and to be recognized and celebrated for it.
There is nothing necessarily wrong, excellence is worth pursuing and to have individuals believe they can achieve it. But there is a huge difference between motivation by passion and interest and motivation by social recognition and celebration.
There are pitfalls and side effects for a society in a toxic focus on 'winners' and 'losers': constant judgement, politics and one-upmanship, the erosion of people's ability to work together without the need for self-congratulation, and the diminishing of the collective. It takes a village and all.
Excellence always comes through; you don't need to do anything special. Individuals who are brilliant will always shine in a self-evident way through their work, without labels or self-congratulation, throughout history, now, and in the future.
But you can't progress alone, progress comes from a generational interlinked collective, and there is huge risk of diminishing the collective and brushing every other factor under the carpet by an extreme focus on individuals.
We need innovation simply because it's fun.
It'll run you through building a twitter clone and introduce you to git, heroku, a bit of CSS/HTML, and even goes into AJAX a bit.
I can't recommend it enough to people looking to get into rails.
In this book you build a (virtualized) computer. It is one of the best books I ever read.
I really liked "Build Your Own Lisp" too. Fun book. :)
Goes through building a Linux system from the ground up, and gives a pretty thorough overview of why everything is working the way that it is.
An extremely thorough guide to ray tracing.
I don't know if it's one of the best, but it teaches Docker concepts with a single project, and as you progress through chapters, you will find different ways you can deploy applications using Docker containers.
I find this approach works well because you don't have to ask "why did he make that design decision?" Instead, I intentionally make common mistakes a beginner would make, wait until they become an issue, and then demonstrate how we can fix that issue. As a result you really get to understand not only how to create a web app in Go but also why developers tend to follow different design patterns.
I said this in another comment, but it is based on Michael Hartl's Rails Tutorial. I think showing someone how to go from nothing to a full app is a great way to help them get into web development without the frustration that comes from piecing together blog posts/docs/trial&error.
If you are interested in Go I'd love to get your feedback :) and if it isn't obvious, I am the author of the book.
I bet this is what you are looking for:
This book taught me how to write a compiler.
Here is its description from its website:
* Comprehensive treatment of compiler construction.
* JavaCC and Yacc coverage optional.
* Entire book is Java oriented.
* Powerful software package available to students that tests and evaluates their compilers.
* Fully defines many projects so students can learn how to put the theory into practice.
* Includes supplements on theory so that the book can be used in a course that combines compiler construction with formal languages, automata theory, and computability theory.
If you already know C or C++ or Java then this book is for you. In my opinion, you can learn many computer science concepts and be able to apply them to your field. The book will teach you how to write a grammar, then write a parser from it, and eventually improve it as you go on reading and doing the exercises. It was a great moment when I felt comfortable writing recursive functions, since grammar rules map naturally onto recursive functions. You'll also learn a nice way to get your compiler to generate assembly code. Another feature of the book is the chapter on Finite Automata, wherein you'll learn how to convert between regular expressions, regular grammars, and finite automata, and eventually write your own 'grep', which was for me a mind-blowing experience. There is lots of other stuff in this book that you could learn. Thank you Anthony J. Dos Reis for writing great books for people like me.
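The idea that each grammar rule becomes a recursive function can be sketched with a tiny recursive descent parser. This is my own minimal illustration (not code from the book), for a toy arithmetic grammar:

```python
# Each grammar rule below becomes one method on the Parser:
#   expr   -> term (('+' | '-') term)*
#   term   -> factor (('*' | '/') factor)*
#   factor -> NUMBER | '(' expr ')'
import re

def tokenize(src):
    # split the input into numbers and single-character operators
    return re.findall(r"\d+|[()+\-*/]", src)

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def next(self):
        tok = self.peek()
        self.pos += 1
        return tok

    def expr(self):                      # expr -> term (('+'|'-') term)*
        value = self.term()
        while self.peek() in ("+", "-"):
            op = self.next()
            rhs = self.term()
            value = value + rhs if op == "+" else value - rhs
        return value

    def term(self):                      # term -> factor (('*'|'/') factor)*
        value = self.factor()
        while self.peek() in ("*", "/"):
            op = self.next()
            rhs = self.factor()
            value = value * rhs if op == "*" else value // rhs
        return value

    def factor(self):                    # factor -> NUMBER | '(' expr ')'
        tok = self.next()
        if tok == "(":
            value = self.expr()
            self.next()                  # consume the closing ')'
            return value
        return int(tok)

print(Parser(tokenize("2*(3+4)")).expr())  # -> 14
```

The recursion in `factor` calling back into `expr` is exactly where the grammar's recursion shows up in the code.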
It goes through lessons that build up to a pretty good interactive disk editor (DSKPATCH) written entirely in x86 assembly.
It's the book that got 12-year-old me out of the BASIC ghetto.
Kalid basically iterates the series around the concept of deriving the formula for the area/perimeter of a circle, and then builds up to deriving the surface area/volume of a sphere. The focus throughout is the building up of an intuition of calculus before leaping into formulas. Even with uni-level calculus, I did strengthen my intuition of what's going on by reading through his book.
It's pretty fun, and I actually spent some time visualizing the calculus of geometric solids afterward i.e http://www.trinco.io/blog/derivative-of-x3
Inform 7 is very much a niche programming language, but it's really interesting and unusual, well worth investigating if you want to broaden your horizons. Vaguely Prolog-like, but written in natural language.
Edit: It may have been this: https://www.amazon.com/Build-Your-Own-Flight-Sim/dp/15716902...
A most excellent grimoire.
You can learn how to build a todo list manager in Go.
This book is about building a webapp from scratch without using a framework.
Build an old-school analog music synth. Very DIY friendly, and great for electronics n00bs. The book is built around a project called the Noise Toaster, but you learn all the analog synth basics along the way. Fun stuff. Old school, and it isn't a wall-sized rack of Moog modules, but hey, good humor.
Swift 2 version: http://shop.oreilly.com/product/0636920045946.do
Swift 3 version: http://shop.oreilly.com/product/0636920053989.do
We build a note-taking app for iOS, macOS, and watchOS.
You build 5 projects through the book: a programming language, a paint program, a DOM game, and a skill-sharing website using Node.js.
It goes from adding the LED to the XBox to tapping the security mechanism. Plus, the original Xbox is cheap nowadays too, so you won't have to shell out a lot of money doing it. Local craigslist should have plenty of them.
1. given a large file, or set of files, write a program/routine to count the number of times an arbitrary sequence of characters appears. No regular expressions or other pattern matching helpers from a library/sdk, you have to do it all yourself. This one is pretty small, but there's lots of opportunity for optimization.
2. build a link shortener service with some analytics/tracking.
3. write a simple tokenizer for whatever syntax/language you feel like. JSON is a super easy one.
4. write a little website crawler. multithread it. implement rate limiting (something more advanced than random sleeps; e.g. token bucket, etc...).
5. make a couple easy data structures yourself. If the language/platform you are working in has the same structure in an SDK (or there's a good open source one), write yours to the same interface and then run it through their test suite. e.g. linked lists, queues, etc...
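For exercise 4, the token bucket can be sketched in a few lines; the class and parameter names here are my own, not from the comment:

```python
# A rough sketch of a token-bucket rate limiter: tokens refill at a
# steady rate up to a burst capacity, and each request spends one.
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost=1):
        """Return True if `cost` tokens are available, consuming them."""
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(rate=2, capacity=5)   # 2 requests/sec, bursts of 5
allowed = [bucket.allow() for _ in range(7)]
print(allowed)  # first 5 True (the burst), then False until the bucket refills
```

In the crawler, you'd call `allow()` before each fetch and sleep briefly when it returns False; this gives smoother throttling than random sleeps.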
Ray Tracing: The Next Week
Ray Tracing: The Rest of Your Life
It's basically a set of tutorials that lead you through the steps of building a software 3D graphics rasterizer. It covers rasterizing, lighting, shading, shadows, textures, etc, and the math behind each set of concepts. It's built on late-90s C and DirectX, but the capabilities used are covered by just about any game programming library. The author builds kind of an abstraction library on top of the DirectX code, and that's pretty easy to rewrite in whichever language and toolset you're comfortable with.
About how to build a subset of git's functionality in NodeJS
Radiosity: A Programmer's Perspective by Ian Ashdown is a full numerically accurate hemicube radiative transfer engine from start to finish. Now a free pdf.
There is an associated book. Great intro to the fundamentals of computer engineering.
Filter spam, parse binary files, catalog MP3s, stream MP3s over a network, and provide a Web interface for the MP3 catalog and server.
 There's an e-book version too. Scroll the page down just a little bit.
It's really good and the game is actually pretty fun.
I wouldn't read the book now though, since Flex, but the approach worked well.
He kinda takes a "Defense of Duffer's Drift" approach towards designing iterative versions of the same project, slowly introducing concepts such as factories and singletons.
It was always on my bucket list to learn to write programming languages, but it's very daunting - this made it easy to learn in bite-sized chunks.
This book teaches you python through a series of example projects. You can get it online or order a physical copy and help support the author here:
1 - https://manning.com/books/nim-in-action?a_aid=niminaction&a_...
Think Stats: Probability and Statistics for Programmers - http://greenteapress.com/thinkstats/
Here, you learn statistics by implementing statistics functions in Python along the way and use them to solve the questions in the book.
It goes over building a link aggregating service using Django.
However, I also remember it being free. Doesn't seem to be an option with the most recent release.
Walks through building a JRPG-style game with Lua. Pretty impressive for the price, especially with all the royalty-free assets that are included.
A little dated now, (although DirectX 9 isn't quite dead yet), but this one has some pretty interesting topics. Good chapter on procedural terrain generation, some basic pathfinding, minimaps and fog-of-war.
The Bandit Algorithms book is sort of like this. It starts out simple and touches on different methods.
Build your own legit Analog Synthesizer!!! Not software but it is an end to end project. Good little book.
Level 1: Can read and write it at your own pace
Level 2: Can comfortably converse in a professional setting (little slang or cultural knowledge needed)
Level 3: Can comfortably converse in a social setting (slang, faster speech, less clarification)
Level 4: Can do all of the above passively, being able to pick up valuable information just by overhearing conversation without focused mental effort
Level 5: Ability to do all of the above in a noisy and hectic situation (like a party, sporting event, etc.)
I bet you'd find that this 40% number is much different for students studying abroad in a country where their native language is common. My US friends who moved to the UK or Australia had no problem making friends.
Follow the source link and you find that 40% have no close American friends, which is different from "friends on campus". I checked the source looking for a baseline (how many domestic students have no close friends) and discovered that it was specifically about international students making friends with Americans.
> "Nearly 40 percent of the survey respondents had no close American friends and would have liked more meaningful interaction with people born here"
This is a very different result - still important, but the corrected stat and the free-response listed in the source make clear that we're looking at a different question than simple loneliness.
edit: The HN headline has been updated, which is great news. Now if only Quartz could meet the same standards...
In the US, I didn't realize it would be even more difficult because 90% of the MS students in the college I attended were foreigners. Anyway, I decided that the best way to make American friends was to be with them all day long. I joined a fraternity on campus.
This was not easy. Most fraternities had never had a foreign student, except the one I was accepted into. Because my English was not great, I focused all my time on one fraternity to increase my chances of being accepted. I was the only foreigner in all the fraternities that week.
It was not always easy, but it was worth it. I joined while doing my MS; all my brothers were freshmen. We had very different workloads and about a 4-year age difference. But I did make friends with all of them. I spent Christmas with one friend's family (I didn't leave the US for winter break). It was a great experience.
I'm rather an introvert. But when you travel in a foreign country, you have to talk to strangers all the time. Expectations are also lower when you don't master the language fully. You have to be very direct and explicit in your communications.
In my limited experience I feel there's a real cultural mismatch with kids from other countries coming here to study. Most of the kids I met were from huge cities, and the shock of being in a small college town was in many ways too much for them. That, and the Greek system was overtly hostile to them.
On a side note, it's humorous what many kids take back with them from their time in America. It's worth doing an image search of "American party" to see what I mean.
The best thing that happened to me was to get an internship in a company where my team was composed mostly of Americans. I learned how to talk slower and more importantly, slowly understood sarcasm as well. Perhaps most Americans don't realize just how much of a shared culture is needed for immigrants to understand before they can communicate effectively.
In every case, they mention sometime during the evening that it's the first American home they have ever been inside. After years of school and a job. Every, every case.
Americans, we can do something about this! Invite a newcomer coworker to join you for dinner! It's so simple.
There were other kids who joined the school from other countries right around the same time as me (we all did ESL together), but they all spoke languages that were highly represented in the school. Over the years, it was incredibly noticeable to me how insular they ended up -- hanging out mostly with expat friends, speaking their native language on breaks, etc.
Meanwhile, I had to try and make American friends any way possible -- which for me was through our school's robotics club (and since I am typing this here, you can guess that the rest is history)
I completely understand the way immigrants rightfully treasure and celebrate their heritage. But I have always found it puzzling -- especially in college -- to see people from overseas mostly hanging out with their own.
I have heard far too many times how "cold" Americans are, how they aren't friendly to foreigners, etc. At least in my experience, that could not be further from the truth. What I HAVE observed is foreigners like myself failing to leave the safety of their known communities and fully embrace the experience they supposedly came here for.
We had two big bunches of foreign students, Chinese and US Americans. Americans organized a lot of parties; the Chinese were hard to engage with. We had a big international community (regular language exchange meetings, parties, movies, BBQs, etc.). But the general thing from the Chinese group was what seemed like shyness. Even when one managed to get them to join, they tended to leave early and barely interact (not for a lack of trying). There were exceptions of course, and I'm currently subletting my apartment (while in another country) to two Chinese students, one of whom turned out to be very talkative once he opened up. But for the majority I met, it's really hard to get them to open up. There is some cultural barrier that's very hard to break.
Of course this might be the same for other nationalities, but as those usually arrived here alone, they didn't have a group of countrymen to fall back to and I couldn't tell.
Honestly, international students are cliquey. Many of them that I talk to openly admit to cheating on the English proficiency exams universities require you to take before you can attend. Meshing with local students is nearly impossible if you don't understand the language proficiently.
I'd expect you'd see roughly the same numbers if you looked at American students in Chinese universities, or elsewhere. But we have to make this anti-American because it's Quartz, and Trump is bad, right?
I guess it all depends on the home country and the American school, but I wonder how much an American undergrad degree is worth in their home countries?
I know in Japan it often isn't seen as worth it because in Japanese undergrad programs, the people you meet often are a major part of your network along with the people you went to high school and middle school. And these networks are essential to your career arc.
So, by going to school in the US, you lose out on these networks.
But, if your goal is to get a job at an international company where you speak English or get a job in the US, then I guess it is worth it.
It does not matter. The majority of foreigners, Westerners or not, won't have real local friends in English-speaking countries.
Even though you do acclimatize to the culture and language, you might still not be fond of it. I lived in countries where it would have been easier than in others to interact with people, but in the end I wasn't able to, because I either wasn't a big drinker (commonplace in all English-speaking countries) or didn't enjoy being involved in mundane silly office conversations during the smoke break.
Most of the fresh expats can't even realize what they are getting into when they move into another country. If you are deciding to do so and you come across this post, do it, go and check it out, but beware that your inner you will never completely mold to that place.
I had the same experience in grad school; I only had "activity" partners rather than what I would call close friendships with Americans. It always felt "distant". It's sadly true even today.
I couldn't find the relevant article that discussed this, specifically in the American context.
Missing important data for contrast: how many non-foreign students have no close friends on campus.
For the curious: I did end up having a much closer group of friends including roughly 50% American students later, though, after bonding over one of the most American experiences possible, a Spring break trip.
First, I think there are two types of people who travel to a new place (this can be a different part of the country, or a foreign student to the USA): 1) the person who wants to try new things and enjoy where they are, or 2) the type of person who hates where they are and refuses to try anything new.
The latter type of foreign student will not make any American friends. They will only stick with their culture and friends of their culture. She has also said a lot of people feel very nervous speaking English, because they will get something wrong and get ridiculed for doing so.
As an American with foreigners in my classes, there are also people who speak their native language to other students who speak it. I empathize with the fact that it is easier for them, but by doing that, the effect that I get is the feeling of exclusion, so I cannot even attempt to try to befriend them. As an anecdote to that, I can say I have felt resentful when I have been the only American in a group of Taiwanese and they did not speak English at all when I was there, as the message I got is they don't want me in the conversation.
I hate to say it is very intimidating, but if you want to befriend Americans while in the country, you HAVE to speak English as much as possible. I am, and I would like to think most others are, very forgiving given that English isn't your first language, and happy to accommodate that. If you do not, most Americans will feel excluded and not even attempt to befriend you.
That being said, the cultural aspect can't be ignored. There's a reason that despite going to two very international schools, almost all of my close friends are European, American, or Australian.
My point is: as a newbie in this country, I had to make the effort; if I didn't do that, I would not have made those friendships.
He also got to laugh hysterically as he saw me try to pronounce some greetings/messages in Cantonese to his father who would call from Hong Kong, so there was that benefit too.
I think the reason was the hyper-competitiveness between students. It doesn't foster any sort of cooperation.
I came to the US in 2007 for my Undergrad from Dubai (Indian by birth). I lived on a dorm floor with 40 people and only one other Indian, and I didn't actually talk to that guy much. Most of my friends are Americans, including some of my closest. To be honest, the few Indian friends I have are people from work.
End of the day it comes down to your comfort zone. People who come to another country to get an "American degree" will stick to their comfort zone. For those of us who come to explore and understand the local culture, we're going to assimilate into the local culture (my kickball team calls me a coconut: Brown on the outside and white on the inside).
Do you want to stay a tourist or become a local?
There is no easy way. You have to separate yourself from your comfort zone. That includes others from your home country as well as other international students. Live with an American roommate, go to every (American) party you are invited to. Say no to every (non-American) party you are invited to. Find an American gf/bf or keep trying. Join volunteering activities (food drives, blood drives, salvation army etc) to meet locals.
In a couple of years, you would have made yourself deeply uncomfortable on many occasions, annoyed some people, but by now you'll be talking and walking like an American.
This applies in general to immigrants, who tend to huddle together because it is the easier thing to do. That is why, in most cases, cultural assimilation takes at least a generation.
If you'd like to read about another perspective to how international students feel after coming to the US, here's an opinion piece I wrote: http://www.dailynebraskan.com/opinion/agrawal-us-universitie...
If you'd like to read more about what it's like to be an international student making friends with domestic students: http://www.dailynebraskan.com/opinion/agrawal-making-friends...
Here's an article one of my colleagues wrote about making friends with international students as a domestic student: http://www.dailynebraskan.com/opinion/simon-making-friends-w...
At some point I sub-consciously stopped trying and went into my comfort zone, i.e. other Indian people who got my jokes and with whom I didn't have to give cultural context before every life story I was telling. I do regret not having made friends from other cultures while I had the best opportunity, i.e. in college.
In fact it is much easier to make friends with other foreigners, even when not fluent in country X's language.
Today my closest friends are the ones I made in college.
Anyway, I was 6 months into a language course that was heavily populated by US-born Korean students who were taking it for the easy A... I needed a tutor to keep up so I reached out to the teacher who introduced me to a few Korean exchange students.
They lived in their own apartment, not the dorms. They cooked their own food, didn't go to the cafeterias. Fast forward a bit, my fraternity had a charity event and I invited a few of them... was a casual invite, said something like, "Hey we're doing this thing, tell your friends!"
A week later at the charity concert like 60 Korean exchange students showed up. Every single one of them was dressed in a tux or evening gown. Totally classed up the place. Had no idea there were that many exchange students until that night.
And they were all really appreciative of the invite. Basically said no one had invited them to any events on campus before... I met some new folks, knew just enough Korean at that point to ingratiate myself and get invited out drinking after the event... and quickly realized I was playing checkers at a chess tournament when it came to drinking with Koreans. Ha.
Made some friends out of the deal, but it wouldn't have happened without everyone going outside of their comfort zone a bit.
Homophily is a known social bias, and certainly affects people in their own country let alone a foreign one. I wonder how much of this is "America" per se, versus standard social forces that affect all populations.
How does this compare to foreign students in Canada, Germany, the UAE, or the UK?
You hang out with Indians and they start talking about cricket, a sport that is widely watched there but that you most likely know nothing about. How can you partake in that conversation? You can, probably, but only at a basic level, and you might not have a lot to add.
So, the same happens with American football, or baseball, or sports that are not widely followed elsewhere.
Those types of conversations marginalize foreigners, even if it's unintentional. Now, if you have empathy, you might prefer to talk about something else, with the purpose of being inclusive.
Before they left, he told my friend and me that he was doing this because he read a statistic showing that a huge percentage of Indian-born workers have never been in an American's home, so he wanted to get his coworkers out of that statistic.
In hindsight, that act was the only decent thing that man has ever done (he is no longer on speaking terms with me, my friend, or virtually everyone else we know for a variety of reasons that have nothing to do with his co-workers).
There aren't too many American citizens in these colleges, and I will not be surprised if these students constitute 50% of total F1s.
You have more in common with fellow travelers than you will have with people who are just status quo.
The current best practice for border crossings --- really anywhere in the world --- is simply not to carry anything you're unwilling to unlock for Customs.
This is going to get harder still. CBP will begin asking everyone for Facebook logins. You'll think of 10 different ways to conceal your Facebook doings from CBP, but CBP has advance traveler's manifests from flights and will know that people have profiles --- and, sometimes, what was on those profiles.
No one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence, nor to unlawful attacks on his honour and reputation.
Maybe even article 9 if the law does not clearly establish what is allowable.
Everyone has the right to liberty and security of person. No one shall be subjected to arbitrary arrest or detention. No one shall be deprived of his liberty except on such grounds and in accordance with such procedure as are established by law.
EDIT: Turns out the US did not really ratify this treaty.  So no human rights in the US, or at least not all of them enforceable in court. There may of course be similar rights from other laws or treaties.
"While CBP Officers are responsible for the examination of electronic devices, only Supervisorsmay authorize the copying of the contents of an electronic device.54 Where an electronic device is to bedetained or seized by CBP, a CBP Supervisor must approve of the detention or seizure, and the CBPOfficer must provide a completed CF 6051D or S, respectively, to the traveler.55 Where a traveler claimsthat the contents of the electronic device contain attorney-client or other privileged material, the CBPOfficer must consult with the local Associate/Assistant Chief Counsel or United States Attorneys Officebefore conducting the examination.56CBP Supervisors may authorize the sharing of the travelers information for assistance or otherlaw enforcement purpose on a case-by-case basis"
If that works, that could be an interesting business opportunity. Set up a series of phone exchange stations near major airports, where outgoing travelers can swap their smartphone for a burner. You ship the smartphone to your station near their destination, and when they arrive they can turn in the burner and retrieve their smartphone.
You could also add in a temporary backup/restore service for laptops. A traveller could bring their laptop to your station when they are preparing to leave, and you do a backup of all user data and then delete it from the laptop.
While they are in transit, you send the data to your exchange station nearest their destination. When the traveller brings their laptop in, the user data gets restored.
Has anyone noticed that the first two example cases are actually political activists being targeted?
- the first is a Chelsea Manning advocate
- the second is a pot legalization activist
There you have it: that's what these rarely used rules are actually used for.
But, hey, what's new? The US has imprisoned innocent people, and tortured them multiple times a day for years and years. This is public information, and no one gives a shit. The US government really doesn't care about anyone who is not a US citizen. (And sadly, western countries mostly just let the US do whatever they want)
A woman is being locked up for 8 years because she checked the wrong box on her voting form, and it is pretty common that unarmed black people get shot by cops without serious consequences.
I understand that people are upset about the possibility of having to unlock their phones, but there are far more serious indications that the US is not the great country it wants to be. A lot of common sense and decency is missing in the US - and not just in the Trump administration.
If the CBP then claims that it's a useful tool/practice because they can identify bad people who shouldn't be allowed in, they should be asked to provide stats on how many people were denied entrance after their devices were checked. My guess is that number is statistically meaningless, probably in the <1% range for people who actually unlock a device.
One of the only exceptions I can think of is cases like this one: http://cyb3rcrim3.blogspot.com/2016/04/the-laptop-child-porn... where there is already a legitimate investigation going on and a person is suspected of serious criminal activity and they don't have enough for a warrant. In these cases, the CBP uses the rules regarding border re-entry to their favor to compel a search of a device because they have at least some sort of reasonable suspicion that a device may contain data for which the mere possession of is highly illegal. Other than that, I'm hard-pressed to think of any cases where the mere possession of data would be a felony unless they suspect you of having classified information without the appropriate clearance.
The real question we should be asking is how to make this a big enough issue that it gets mainstream attention and action.
We shouldn't try to figure out how to continue down the fascist path but how to change its direction.
In any case, if you find yourself in this unfortunate situation (you shouldn't - you should travel with an empty phone when crossing the border, or at least wipe it before crossing and then do a restore over iCloud later), here's something you can do. Before going in (if you have an iPhone), make sure you have a strong alphanumeric passcode. Enable auto-wipe on 10 incorrect passcode attempts. Then, turn the phone off. If CBP then asks you to turn your phone on and wants your password, appear eager to comply, but give them an incorrect one; when that doesn't work, admit that you had just changed it a few days ago to something more secure and that you may have forgotten it, but don't worry, you always just use the thumbprint to unlock the phone anyway, and volunteer to open it that way. Always appear helpful. The phone still won't unlock, because the iPhone requires passcode entry after a restart. If they ask why you turned the phone off, just say that's what you normally do when entering border control. Unless you're under serious suspicion of some kind at this time, they're unlikely to detain you for forgetting your phone password, and even if so, it's locked with a strong passcode so there's not much they can do anyway.
In other words, the CBP was completely unhelpful and simply referred them to published legalese online rather than clarifying that legalese. It really feels like the culture of DHS and CBP views the people they are supposed to be serving as adversaries.
And what if my battery is dead or my phone stops working? Can they force me to log in to their own Android phone?
What if I don't carry any phone at all - can they simply force me to log in to an empty phone? (Assuming everyone either has an Android or Apple phone or is going to spend a couple of days in detention.)
If my work requires me to travel there, I will simply ask to have electronic material provided in the US, and download whatever tools I need once inside the US.
I would advise anyone traveling to have versioned working tools stored online anyhow.
Even better, do a cloud backup, wipe your phone or laptop, and restore the backup once you're through?
Or am I missing something?
It bothers me that the US is providing a terrible example of how to treat visitors.
I've been thinking recently of just closing all my accounts, including email.
This will leave me in a state where I have pretty much no social accounts, just 1 email account which is used for friends/family.
Are there any cases of individuals giving up their phone and it being a wasteland of information? I personally do not use facebook, google-plus, twitter, linkedin or anything else. I literally do not have an opinion, nor do I care to share it!
Edit: Do they also check when email accounts have been created? How could they know, unless they checked in with the provider? I'm thinking of closing my main account, requesting data be deleted and starting afresh!
The real problem here is the downloading of data from the phone. Once that data is off-loaded, the government can make it a part of their "Seven-degrees-of-Kevin-Bacon-Osama-Bin-Laden edition" game, stored forever, perpetually mined for "connections" no matter how remote. That's a scary and slippery slope to surveillance-state hell. I don't think such practices are commonplace - not yet, anyway.
Has anyone here ever been in a situation where the CBP confiscated their phone/laptop or downloaded data electronically from it?
This is a precarious situation to be in, and I'd be surprised if it doesn't violate some provision of the law. Worse, there have been conflicting judgments in similar cases [0, 1].
On a side note, can the law itself be unbiased, given the nature of the cases heard in courts? A situation X regarding law LX might be heard 1000 times and become quite clear, in contrast to a situation Y concerning law LY that went to court just 3 times. Could one computationally identify laws that are unclear, based on the number of times they are referenced in court judgments or some other similar parameters?
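The counting idea behind that question can be sketched in a few lines. Everything below is invented for illustration - the statute IDs and the scoring rule are hypothetical, and a real system would need to weight by court level, recency, and so on:

```python
from collections import Counter

def ambiguity_scores(judgments):
    """Rank statutes by how rarely they have been tested in court.

    `judgments` is a list of (case_id, statute_id) pairs; a statute cited
    in only a handful of judgments gets a high ambiguity score, on the
    theory that it has accumulated little clarifying case law.
    """
    counts = Counter(statute for _, statute in judgments)
    # Fewer citations -> higher score; +1 avoids division by zero.
    return {statute: 1.0 / (n + 1) for statute, n in counts.items()}

# Toy data mirroring the example in the comment: LX heard 1000 times,
# LY heard only 3 times.
judgments = [("case_x", "LX")] * 1000 + [("case_y", "LY")] * 3
scores = ambiguity_scores(judgments)
# LY, cited only 3 times, scores as far more "unclear" than LX.
```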
Presumably they wouldn't just shrug and say 'OK, then' - but neither could they (if the classifying authority was one they cared about - say, NATO) just say 'Tough luck, now unlock it!', right?
Many of the complaints here are a (justified) fear that privacy invasion occurs as a side-effect of a clumsy and poorly thought-out attempt to prevent something illegal. So if I go ahead and sign up for the background check + interview to let them see I don't have ill intentions, would this reduce the likelihood of collateral privacy invasion when actually crossing the border?
I've been considering this for convenience reasons but am curious if it might help in this instance.
(There is also the issue of restricting freedom and declaring people "illegal" when that runs against the spirit of this country, but I'm not focusing on that here.)
Civil rights often tend to be only as useful as our knowledge of what exactly they are.
I would suggest saying something like "sure thing, but I use this device to talk to my lawyer and I should ask him first if it is OK." They won't want the hassle of dealing with this perfectly reasonable request. (Don't fib. Have a lawyer first. All you need to do is email one a couple of times for that statement to be true.)
And recently I'm hearing about this everywhere - it largely went ignored during the previous administration, but now it's a major concern: articles on HN, my Facebook feed, Reddit, ...
Name it something helpful like "do_not_open_this_file.zip".
They can't say you didn't warn them ...
As one single returnee who is not caught might actually cause dozens or hundreds of citizen deaths, there shouldn't be a screening exception for smartphones or computers.
We can all argue that the current heuristics / profiling methods are not good enough, but as an EU citizen I'd be glad if my government would actually be as straightforward about screening travellers as USCBP is. If travelers - citizens or not - want to return after learning to kill or taking part in some sort of criminal activities, or even announcing their support for such criminal activities in social media, they should be held accountable for their actions upon returning by strict border controls.
I implemented a slight variation on this CNN using Keras and TensorFlow for the third project in term 1 of Udacity's Self-Driving Car Engineer nanodegree course (not special in that regard - it was a commonly used implementation, as it works). Give it a shot yourself: take this paper, install TensorFlow, Keras, and Python, download a copy of Udacity's Unity3D car simulator (it was recently released on GitHub), and have at it!
Note: For training purposes, I highly recommend building a training/validation set using a steering wheel controller, and you'll want a labeled set of about 40K samples (though I have heard you can get by with far fewer, even unaugmented - my sample set actually used augmentation of about 8k real samples to boost it up to around 40k). You'll also want to use a GPU and/or a generator or some other batch processing for training (otherwise, you'll run out of memory post-haste).
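One common way to inflate a small steering dataset like this (not necessarily the exact augmentation used above - real pipelines also shift, shadow, and adjust brightness) is horizontal flipping: mirror the camera image and negate the steering label. A minimal, framework-free sketch, with images represented as nested lists for illustration:

```python
def flip_sample(image, steering_angle):
    """Mirror an image left-to-right and negate the steering label.

    `image` is a list of rows, each row a list of pixel values; flipping
    horizontally means reversing each row. A left turn in the original
    becomes an equally sharp right turn in the mirrored copy.
    """
    flipped = [list(reversed(row)) for row in image]
    return flipped, -steering_angle

def augment(samples):
    """Double a dataset by adding a mirrored copy of each sample."""
    out = []
    for image, angle in samples:
        out.append((image, angle))
        out.append(flip_sample(image, angle))
    return out

# One tiny fake sample: a 1x3 "image" with a 0.25 left-steering label.
tiny = [([[1, 2, 3]], 0.25)]
augmented = augment(tiny)
# -> [([[1, 2, 3]], 0.25), ([[3, 2, 1]], -0.25)]
```

In a real pipeline the same idea is applied to numpy image arrays inside the batch generator, so the flipped copies never have to be stored on disk.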
> Machine learning is the science of credit assignment. The machine learning community itself profits from proper credit assignment to its members. The inventor of an important method should get credit for inventing it. She may not always be the one who popularizes it. Then the popularizer should get credit for popularizing it (but not for inventing it). Relatively young research areas such as machine learning should adopt the honor code of mature fields such as mathematics: if you have a new theorem, but use a proof technique similar to somebody else's, you must make this very clear. If you "re-invent" something that was already known, and only later become aware of this, you must at least make it clear later.
I feel like I should find a lot more info on this in the sentiment analysis literature but I don't really.
I've been reading a number of these papers, but it's really tough to understand the nitty-gritty details.
Even a simple algorithm would be effective: the number of citations for each paper decayed by the age of the paper in years.
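That rule fits in a few lines. The linear-in-age decay below is just one choice (exponential decay would work too), and the paper data is invented for illustration:

```python
def rank_papers(papers, current_year=2017):
    """Score each paper by citation count discounted by age in years.

    `papers` maps title -> (citation_count, publication_year). The +1
    keeps papers published this year from dividing by zero.
    """
    def score(item):
        citations, year = item[1]
        age = current_year - year
        return citations / (age + 1)
    return sorted(papers.items(), key=score, reverse=True)

papers = {
    "old classic": (10000, 1997),   # 10000 / 21 ~ 476 per year of age
    "recent hit": (900, 2015),      # 900 / 3 = 300
}
ranking = rank_papers(papers)
# "old classic" still outranks "recent hit" despite the decay.
```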
I want to buy a Raspberry Pi Zero, put it in a nice case, add two push buttons and turn it into a car music player (hook it into the USB charger and 3.5mm jack in my car). The two buttons will be "like" and "skip & dislike". I'll fill it with my music collection, write a python script that just finds a song, plays it, and waits for button clicks.
I want the "like" button to be positive reinforcement and the "skip & dislike" to be negative reinforcement.
Could someone point me in the right direction?
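Strictly speaking this is closer to a multi-armed-bandit / weighted-playlist problem than deep reinforcement learning, and a very simple scheme already behaves well: keep a weight per song, bump it on "like", shrink it on "skip & dislike", and sample proportionally to the weights. The filenames and update constants below are made up:

```python
import random

def pick_song(weights, rng=random):
    """Pick a song with probability proportional to its weight."""
    songs = list(weights)
    return rng.choices(songs, weights=[weights[s] for s in songs])[0]

def feedback(weights, song, liked, step=0.5, floor=0.05):
    """'like' boosts a song's weight; 'skip & dislike' shrinks it.

    The floor keeps disliked songs from vanishing entirely, so they
    still resurface occasionally and can be re-rated.
    """
    if liked:
        weights[song] += step
    else:
        weights[song] = max(floor, weights[song] - step)

weights = {"song_a.mp3": 1.0, "song_b.mp3": 1.0}
feedback(weights, "song_b.mp3", liked=False)  # skip & dislike
feedback(weights, "song_a.mp3", liked=True)   # like
# weights is now {"song_a.mp3": 1.5, "song_b.mp3": 0.5}
```

On the Pi, the main loop would read the GPIO buttons, call `feedback`, and then `pick_song` for the next track.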
If you can sim a set of boxes, you can learn what's inside them.
I went through an intense several year period of studying jazz music beginning around age 16. I remember the discipline it required and the speed at which I was able to progress.
When I got into programming at age 33, it felt exactly the same. The discipline, the hours, the speed of progression; all very similar. Having devoted many thousands of hours to mastering a skill in my teenage years and again in my 30s, I'm not aware of any differences in capacity to improve at one age over the other.
I attended a boot camp and had ~25 classmates all over the age spectrum. Some were smarter than others, some worked harder than others, but there was no age correlation on either of those observations. I also didn't see any age correlated patterns in success in the job market. Some old and young students got jobs immediately; some old and young students had to fight it out a little longer.
If I lose my passion for software at age 40, 50 or 60, I would be very open to pursuing something new at any point in my life.
In my case, I already knew Python and Ruby from working in ops, so it wasn't completely new to me as was the case for the people in the article. What is new to me is having larger programming tasks that require focus and teamwork. The nature of ops tends to involve a lot of context switching and as a result most of my projects were small and self contained. Learning how to collaborate with others on the same code base is a big adjustment.
I am also incredibly impressed with how patient my colleagues are. I know "how to code" broadly, but that's not the same thing as "how to be a software developer." I'm fortunate enough to have the opportunity to learn as I go.
It is always a mix of: "quit now, it takes years to become a developer worthy of the name", "don't do it, software development is not the glamorous job you think, it's awful" and "give it up, you just won't be hired for a good job that easily".
An impression I often get is that some developers have a kind of resentment reaction to "everyone can/should learn to code" - like new kids sneaking into your own private club, outsiders trying to be like you. There is only one type of developer: the kid who started to code at 12. All others are impostors and wannabes.
I'm glad that there is this support sentiment for stories like this as well.
It's nice to see the success stories but I'm always wary of survivorship bias. If there are ten people who couldn't make a go of it for every one of these stories, it puts a different view on things.
(I say this as a 37 year old freelance writer currently learning Elixir and React in the hope of shifting careers.)
I decided to learn programming at 38, started at 39, and now at 41 I am in the process of releasing a commercial version of software I created. I moved into this field from medicine, to have greater control over my life, scratch my entrepreneurial itch and broaden my horizons. I have no regrets.
You can do it if you really want to, and at any age.
I actually credit hacker news for a lot of it because repeating what I read here makes me seem a lot smarter than I am, I think.
I'm 36, started programming at about age 6 or 7 (thanks largely to my mother being a programmer and helping me learn the basics), and still spend a lot of my free time on personal projects. I actually didn't start programming for a living until age 26 (did a stint in the Army right after college).
I haven't had a significant raise in the last 4 years. In fact, I made a higher salary (albeit in a higher COL area) 5 years ago. Early in my career I was getting 20%+ pay bumps just for switching jobs. That doesn't happen anymore. More often than not I have to make my salary expectations clear from the first conversation with a potential employer, lest I waste a lot of my time only to receive an offer 30% lower than my current salary.
I'm at the point of thinking about leaving the field (I'll never stop programming in my free time, though) and finding something else with some actual upward mobility.
I know for certain that older workers face subtle and not-so-subtle discrimination, which makes it incredibly unfortunate that companies don't focus on it more in their workforce diversity initiatives. Is this a problem companies are less willing to confront, compared to, say, gender and race diversity?
It is also one of the few disciplines that is included in part-time degrees at some universities. This means you can even be taught radically new things in the field without necessarily giving up your day job.
This isn't to say it's not worth becoming a dev at a later age -- coding is an increasingly crucial skill for entrepreneurs -- but it seems cruel to entertain the myth that older devs get hired by the handful.
I'm 25, and I started (properly) learning and liking coding about a year ago. Seeing people 4-5+ years younger than me with more knowledge and experience is discouraging. It doesn't help that in a job interview I was told, "Why should we hire you when there are people younger than you with more experience?"
One thing you'll likely have in your favour is knowledge of another domain to a degree that perhaps no other programmer has.
Even so you might have to go the start up route and then unless you have strong marketing skills you're unlikely to do well.
"Is it just you? What happens if you die?" is a question you'll get over and over.
- it calls into question the individual's motivation and ability to act on that motivation - this is impossible to measure.
- there is no such thing as a generic "software developer". We write software, this is true; however, software is so embedded in society that one software engineer may be performing an entirely different role from another. And different roles naturally demand differing skill sets - this is impossible to specify.
So, without meaning to offend, bundling up the entire older population and asking whether they can do a job that is hugely variable is a bit of a nonsense question.
There were significant risks associated with my decision: financial burdens (loans, credit card debt accumulation) and the opportunity cost of not earning income for over a year. I quit my primary job, leaving me with no safety net. It was scary at my age to do this, but by taking on such huge financial risks I was even more motivated to succeed - to fail would have been devastating.
We learn skills out of necessity or out of passion, sometimes out of both. Whether we become good at something depends on a number of factors, and obviously there are efficient older workers and incompetent younger workers in every industry; this dualism applies to programmers as well.
A computer is a means to an end. People in disciplines such as biology, mathematics, sociology learn and use programming as a tool to solve their day-to-day work-related problems because it helps boost their productivity.
While I'm still struggling with the first one, I found the rest are all very well achieved.
To all late starters: keep going as long as the resource is available and good luck to all of us.
edit: typo and replace "tech" with a more detailed description.
If you have the interest, you will pick up the stuff you need for the job fast. Even though many of my colleagues have been programming since childhood, I feel that their growth curve has leveled out compared to mine, to the point that we're on par in some areas.
The people shown in the article were unexpected; I think it's even more interesting when people jump right into programming from jobs you wouldn't expect.
Just because you can be an entrepreneur before 30 doesn't mean you'll have the life experience to be good at it, and many junior-level software development jobs expect a level of commitment that's unfeasible for people who "have gotten a life".
Otherwise the comparison is mostly meaningless: a faster rendering implementation is of little use if the output doesn't look as nice. I wouldn't care whether the text I'm reading over 5 minutes takes 200ms or 800ms to render.
Once upon a time, the Mac was a great development platform...
In practice I would expect, say, a gaussian filter to be both easier to approximate and less prone to aliasing artifacts. Apparently that expectation is completely wrong though, since nobody seems to implement it that way! What's so special about vector graphics that makes the box filter behave well?
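For the special case of a straight edge, both filters reduce to closed forms, which makes the comparison easy to sanity-check numerically. This is a generic signal-processing sketch (nothing from Pathfinder itself), and the Gaussian sigma below is an arbitrary choice:

```python
import math

def box_coverage(d):
    """Exact coverage of a unit pixel by a half-plane edge at signed
    distance d from the pixel center (box/area filter): the covered
    fraction of the pixel, clamped to [0, 1]."""
    return min(1.0, max(0.0, d + 0.5))

def gaussian_coverage(d, sigma=0.5):
    """The same edge filtered with a Gaussian of std-dev sigma:
    the Gaussian CDF evaluated at the signed distance."""
    return 0.5 * (1.0 + math.erf(d / (sigma * math.sqrt(2.0))))

# Compare the two filters' edge responses at a few distances.
for d in (-0.5, 0.0, 0.5):
    print(f"d={d:+.1f}  box={box_coverage(d):.3f}  gauss={gaussian_coverage(d):.3f}")
```

One practical point this illustrates: the box filter's response to a polygon edge is a simple area computation, which is exactly what coverage-based vector rasterizers accumulate, whereas a Gaussian would need either a wider support or a lookup table per edge.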
Happy to answer any questions :)
Months ago I posted a few screenshots on Twitter (https://twitter.com/datenwolf/status/714934185564225536), and the comment by Michael IV is spot on. The renderer has no problem with sharp corners, but so far the glyph preprocessor still struggles with it and I have to manually adjust the emitted output to get nice results.
Hinting is not obsolete.
Unfortunately AGG 2.5 is now GPL so if you need to stick it into anything closed source you are stuck using 2.4's modified BSD.
That said, I think having a CPU tessellation path is going to be critical if you want to see wide adoption. Platforms like Android don't always have geometry shaders, which is why you see FreeType so widely used.
https://youtu.be/XnDYuQUN4J0?t=1060
https://github.com/damelang/nile
~/dev/rust/pathfinder$ cargo run --release --example lorem-ipsum -- resources/tests/nimbus-sans/NimbusSanL-Regu.ttf
 Downloading clap v2.20.3
 Downloading image v0.12.3
 Downloading bencher v0.1.2
 Downloading quickcheck v0.4.1
 Downloading semver v0.2.3
 Downloading glfw-sys v3.2.1
 Downloading enum_primitive v0.1.1
 Downloading nom v1.2.4
 Downloading cmake v0.1.20
 Downloading gcc v0.3.43
 Downloading vec_map v0.6.0
 Downloading unicode-segmentation v1.1.0
 Downloading ansi_term v0.9.0
 Downloading unicode-width v0.1.4
 Downloading term_size v0.2.2
 Downloading strsim v0.6.0
 Downloading gif v0.9.0
 Downloading glob v0.2.11
 Downloading png v0.6.2
 Downloading scoped_threadpool v0.1.7
 Downloading jpeg-decoder v0.1.11
 Downloading color_quant v1.0.0
 Downloading lzw v0.10.0
 Downloading inflate v0.1.1
 Downloading deflate v0.7.4
 Downloading adler32 v0.3.0
 Downloading rayon v0.6.0
 Downloading deque v0.3.1
 Downloading num_cpus v1.2.1
 Downloading env_logger v0.3.5
 Downloading regex v0.1.80
 Downloading aho-corasick v0.5.3
 Downloading thread_local v0.2.7
 Downloading regex-syntax v0.3.9
 Downloading memchr v0.1.11
 Downloading utf8-ranges v0.1.3
 Downloading thread-id v2.0.0
   Compiling adler32 v0.3.0
   Compiling utf8-ranges v0.1.3
   Compiling ansi_term v0.9.0
   Compiling color_quant v1.0.0
   Compiling term_size v0.2.2
   Compiling enum_primitive v0.1.1
   Compiling lzw v0.10.0
   Compiling scoped_threadpool v0.1.7
   Compiling bencher v0.1.2
   Compiling winapi-build v0.1.1
   Compiling inflate v0.1.1
   Compiling unicode-width v0.1.4
   Compiling unicode-segmentation v1.1.0
   Compiling glob v0.2.11
   Compiling nom v1.2.4
   Compiling gif v0.9.0
   Compiling num-integer v0.1.32
   Compiling rand v0.3.15
   Compiling memchr v0.1.11
   Compiling aho-corasick v0.5.3
   Compiling regex-syntax v0.3.9
   Compiling deflate v0.7.4
   Compiling gcc v0.3.43
   Compiling semver v0.2.3
   Compiling strsim v0.6.0
   Compiling deque v0.3.1
   Compiling winapi v0.2.8
   Compiling num_cpus v1.2.1
   Compiling vec_map v0.6.0
   Compiling lord-drawquaad v0.1.0 (https://github.com/pcwalton/lord-drawquaad.git#171a2507)
   Compiling num-iter v0.1.32
   Compiling kernel32-sys v0.2.2
   Compiling clap v2.20.3
   Compiling num-complex v0.1.35
   Compiling thread-id v2.0.0
   Compiling thread_local v0.2.7
   Compiling num-bigint v0.1.35
   Compiling rayon v0.6.0
   Compiling cmake v0.1.20
   Compiling jpeg-decoder v0.1.11
   Compiling num-rational v0.1.35
   Compiling num v0.1.36
   Compiling glfw-sys v3.2.1
error: failed to run custom build command for `glfw-sys v3.2.1`
process didn't exit successfully: `/home/mich/dev/rust/pathfinder/target/release/build/glfw-sys-66de5311db1a83bd/build-script-build` (exit code: 101)
--- stdout
running: "cmake" "/home/mich/.cargo/registry/src/github.com-1ecc6299db9ec823/glfw-sys-3.2.1/." "-DGLFW_BUILD_EXAMPLES=OFF" "-DGLFW_BUILD_TESTS=OFF" "-DGLFW_BUILD_DOCS=OFF" "-DCMAKE_INSTALL_PREFIX=/home/mich/dev/rust/pathfinder/target/release/build/glfw-sys-79c50ef4a5edfcd6/out" "-DCMAKE_C_FLAGS= -ffunction-sections -fdata-sections -fPIC -m64" "-DCMAKE_C_COMPILER=/usr/bin/cc" "-DCMAKE_CXX_FLAGS= -ffunction-sections -fdata-sections -fPIC -m64" "-DCMAKE_CXX_COMPILER=/usr/bin/c++" "-DCMAKE_BUILD_TYPE=Release"
-- The C compiler identification is GNU 6.2.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Could NOT find Vulkan (missing: VULKAN_LIBRARY VULKAN_INCLUDE_DIR)
-- Using X11 for window creation
-- Configuring incomplete, errors occurred!
See also "/home/mich/dev/rust/pathfinder/target/release/build/glfw-sys-79c50ef4a5edfcd6/out/build/CMakeFiles/CMakeOutput.log".
See also "/home/mich/dev/rust/pathfinder/target/release/build/glfw-sys-79c50ef4a5edfcd6/out/build/CMakeFiles/CMakeError.log".
--- stderr
CMake Error at /usr/share/cmake-3.5/Modules/FindX11.cmake:439 (message):
  Could not find X11
Call Stack (most recent call first):
  CMakeLists.txt:192 (find_package)
thread 'main' panicked at '
command did not execute successfully, got: exit code: 1

build script failed, must exit now', /home/mich/.cargo/registry/src/github.com-1ecc6299db9ec823/cmake-0.1.20/src/lib.rs:573
note: Run with `RUST_BACKTRACE=1` for a backtrace.
Build failed, waiting for other jobs to finish...
error: build failed
The sort pkg now has a convenience function for sorting slices: instead of having to define a special slice type just to sort on a given criterion, you can pass a comparison function instead.
HTTP/2 Push is now in the server, which is fun but, like context, might take a while for people to start using in earnest. Likewise graceful shutdown. Is anyone experimenting with these yet?
Plugins are here, but on Linux only for now. This will be interesting long term for things like server software that wants to let others compile plugins and distribute them separately; presently such code has to be compiled into the main binary.
Performance: GC pauses are now down to 10-100 microseconds, and defer and cgo are also faster. Compilation time is improving but still not close to 1.4.
GOPATH is optional now, but you still need a path where all Go code is kept; perhaps eventually this requirement will go away. GOPATH/pkg is just a cache, GOPATH/bin is just an install location, and GOPATH/src could really be anywhere, so I'm not sure a special Go directory is required long term. If vendoring takes off, import paths could be project-local.
There's a slide deck here with a rundown of all the changes from Dave Cheney:
Finally, as someone using Go for work and play, thanks to the Go team and everyone who contributed to this release. I really appreciate the incremental but significant changes in every release, and the stability of the core language.
A more useful post would have waited for golang.org to be updated and linked to the official release notes.
Edit: Thanks to whoever updated the link to point to something useful at least. Still would have preferred this came down until the release was actually posted.
Caddy already has a few interesting ideas on how to use this: https://github.com/mholt/caddy/pull/1215#issuecomment-256360...
But after upgrading to 1.8 I am now observing 3-4% binary increase vs 1.7, so the trend is again reversed back to fatter binaries. :(
So Go is getting better and better for tasks that are sensitive to GC latency.
I just removed my previous version, and then realized there's no 1.8 on their website.
However, the other difficult thing about power-law distributions is that the dataset sizes required to properly establish that something is a power-law distribution can be enormous. So their critique is very strong, given the comparative lack of data. It is often the case that computer systems, with their overflowing reams of data, still don't have enough. Note that the paper I cited above suggests MLE followed by a Kolmogorov-Smirnov test, so it will say a lot of things aren't power laws that could well be.
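The MLE step alluded to here is a one-liner: for continuous data above a cutoff x_min, the estimator (from Clauset, Shalizi and Newman's methodology) is alpha = 1 + n / sum(ln(x_i / x_min)). A sketch with synthetic data, leaving out the x_min selection and the KS goodness-of-fit step:

```python
import math
import random

def fit_power_law_alpha(xs, xmin):
    """Continuous MLE for a power-law exponent:
    alpha = 1 + n / sum(ln(x_i / xmin)), over samples x_i >= xmin."""
    tail = [x for x in xs if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Sanity check: sample from a pure power law with alpha = 2.5 via
# inverse-CDF sampling; the fit should recover roughly 2.5.
rng = random.Random(42)
alpha_true, xmin = 2.5, 1.0
samples = [xmin * (1.0 - rng.random()) ** (-1.0 / (alpha_true - 1.0))
           for _ in range(50_000)]
alpha_hat = fit_power_law_alpha(samples, xmin)
print(alpha_hat)  # close to 2.5
```

The point of the critique survives even in this clean setting: with a few hundred samples instead of 50,000, the estimate wobbles enough that distinguishing a power law from a lognormal becomes genuinely hard.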
Another way to look at it is from a more geometric point of view. The metric entropy of any generic system of variables is defined as the sum of the positive Lyapunov exponents: and as an "entropy" that quantity does have a lot of commonalities with the other entropies. But to have positive Lyapunov exponents is often to have a chaotic dynamics, so it could just be conjectured that the time series of commits and merge octopus sizes in kernel git history is chaotic, so the evolution of the time series will be fractal in nature.
But it's also really fucking hard to confirm or deny that one, because there are varied and strange definitions of chaos itself and the methods that have been suggested to measure Lyapunov exponent in real systems are arcane and difficult. You could try some synchronization methods, but they remain arcane and crap. Fractal measurement methods are also shitty and full of dark magic.
One neat little trick might be to discretize the series, symbolic dynamics-style (it's already discretized but discretize further, into like percentiles or something) and run it through one of the dynamical machine learning dealies to see if there's patterns. Not too much literature on that but it's a thing that some randoes in like 2004 or something did
commit 13e652800d1644dfedcd0d59ac95ef0beb7f3165
Merge: 4332bdd 88d7bd8 88d7bd8
Author: David Woodhouse <email@example.com>
Date:   Sun May 8 13:23:54 2005 +0100

    Merge with master.kernel.org:/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
Here's how I would do it:
time git log -m --first-parent --shortstat --pretty="%H" --min-parents=2 \
  | grep -v '^$\|3e1dd193edefd2a806a0ba6cf0879cf1a95217da' \
  | sed 's/.* file.* changed,//' \
  | sed 's/insertion.*,/+/' \
  | sed 's/deletion.*//' \
  | sed 's/insertion.*//' \
  | sed 's/^\ \(.*\)\ $/\$\(\(\1\)\)/' \
  | xargs -d '\n' -L 2 echo echo \
  | bash \
  | sort -k 2,2 -g
Of course "--first-parent" doesn't guarantee that we're walking the mainline (see: https://developer.atlassian.com/blog/2016/04/stop-foxtrots-n... ), but it usually is.
On my laptop it takes 3 mins 30 seconds. Here are the 5 biggest merges by this definition:
099bfbfc7fbbe22356c02f0caf709ac32e1126ea 463702
3f17ea6dea8ba5668873afa54628a91aaa3fb1c0 466320
ce519e2327bff01d0eb54071e7044e6291a52aa6 500074
7ea61767e41e2baedd6a968d13f56026522e1207 504965
f063a0c0c995d010960efcc1b2ed14b99674f25c 569691
099bfbfc7fbb 2015-06-26T13:18:51-07:00 Merge branch 'drm-next' of git://people.freedesktop.org/~airlied/linux
3f17ea6dea8b 2014-06-08T11:31:16-07:00 Merge branch 'next' (accumulated 3.16 merge window patches) into master
ce519e2327bf 2009-01-06T17:04:29-08:00 Merge git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging-2.6
7ea61767e41e 2009-09-16T08:11:54-07:00 Merge git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging-2.6
f063a0c0c995 2010-10-28T12:13:00-07:00 Merge git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging-2.6
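For anyone who would rather not pipe generated shell into bash, here is a rough Python take on the parsing-and-summing step, assuming the same `git log -m --first-parent --shortstat --pretty=%H --min-parents=2` output format (the sample hashes below are fake):

```python
import re

# A --shortstat line looks like:
#  " 3 files changed, 10 insertions(+), 2 deletions(-)"
# where either the insertions or deletions clause may be absent.
SHORTSTAT = re.compile(
    r"(\d+) files? changed"
    r"(?:, (\d+) insertions?\(\+\))?"
    r"(?:, (\d+) deletions?\(-\))?")

def merge_sizes(log_text):
    """Pair each commit hash with insertions + deletions from the
    shortstat line that follows it, sorted ascending by size."""
    sizes = {}
    current = None
    for line in log_text.splitlines():
        line = line.strip()
        if re.fullmatch(r"[0-9a-f]{40}", line):
            current = line
        else:
            m = SHORTSTAT.search(line)
            if m and current:
                ins, dels = (int(g or 0) for g in m.groups()[1:])
                sizes[current] = ins + dels
    return sorted(sizes.items(), key=lambda kv: kv[1])

sample = "\n".join([
    "a" * 40,
    " 3 files changed, 10 insertions(+), 2 deletions(-)",
    "b" * 40,
    " 1 file changed, 5 deletions(-)",
])
print(merge_sizes(sample))
```

Feed it the captured output of the git command; the --first-parent caveat above applies just the same.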
Showing 126 changed files with 14,128 additions and 20,617 deletions.
(ok, I'm pretty proud of reducing code size by 6k+ lines while improving lots of stuff, but the commit is a shitshow)
The etymologically correct plural is octopodes. (Some people accuse "octopodes" of being pedantic, but as I see it, "pedantic" is just a euphemism for "correct in a way I don't like".)
$ git log | wc -l
$ git log --oneline | wc -l
I built a small web UI where developers could select and unselect development branches, and it would octopus-merge all selected branches into the master branch, and force-push that state onto the QA branch (and deploy it to QA, of course). So QA would always be master + all development branches that were currently being verified. By using a Github webhook, it would update the QA system whenever master or one of the branches being verified was pushed to. I'm not in that team anymore, but I think that deployment tool is still humming along nicely.
One could say that the distribution has a fat one-sided tail though.
Perhaps git should throw a warning when you try to do an octopus merge with more parents than an octopus has legs. If you really want to proceed, add the --cthulhu option. The default behavior would be --no-cthulhu.
the Tor Browser might be the least safe browser to use of all available browsers that can be installed on modern computers. It is a perfect storm of "inferior security design" and "maximized adversarial value per exploit dollar spent". / Don't use Tor Browser.
He recommends Chrome (presumably over the Tor network). I tend to believe the expert, because IME real security expertise (as opposed to technically sophisticated people reading about security and trying to DIY) is rarely utilized and applied even by prominent organizations and projects. But I wish someone would reconcile all of this.
EDIT: Some clarifying edits
The security it provides is marginal, but it's so simple that it's not the part of anyone's stack that's most likely to be compromised.
I think a significantly better version of this could be built. What makes doing that tricky is that you want to retain the almost hello-world simplicity of this app, because the big reason not to run something like this is the likelihood that the server itself will have flaws.
On the other hand, it's 2017, and you can also accept files over secure messengers.
Amusingly, people seem to think that these are bad things to say about an application like SecureDrop.
The gateway site is only accessible over HTTPS, and it links to an .onion address for use with Tor Browser, with mentions of TAILS; all the caveats about using the stated software still apply, though.
An excellent alternative to SecureDrop. At least so it seems...
One rough idea is that large organizations make specific press releases or announcements, that a precommitment could demonstrate privileged access to.
Another idea would be inclusion of some internal communication, which other members of the organization could confirm. This would require those other members to be sympathetic to the leaking, and also not worried about reprisals for speaking publicly like so. This probably isn't useful on its own, but the basic mechanism could be combined with other means to derive utility without public attestation.
The biggest issue is (of course) an adversarial organization subtly changing to-be-published information, to sniff out the actual leaker. Which is why I'm envisioning the need for some formality that could quantify and mitigate such leakage.
The problem I see is that there will be no more important leaks:
a) Given that around 50% of the US population was brainwashed by government and media into believing Snowden is a traitor,
b) Given the fact that America has elected a president who wants Snowden executed,
c) Given that the NSA has locked down their systems completely since Snowden's revelations.
Who would want to take these risks to leak anything, just to be put on "the list" by their own country and people?
If Snowden's leaks were not enough to get people thinking then the only thing that will is serious pain and suffering. And that is what I personally expect to come (for the lower and middle class, at least).
At DuckDuckGo our overall goal is to raise the standard of trust online. To do that we've focused heavily on search, but we also try to support organizations that push privacy forward in other ways.
For the 7th year in a row, we've announced our donations to organizations and FOSS projects that help keep everyone a little safer in our digital world.
$100,000 - Freedom of the Press Foundation
$75,000 - World Privacy Forum
$29,000 - Whisper Systems
$25,000 - Privacy Rights Clearinghouse
$25,000 - Tor
$25,000 - Electronic Frontier Foundation
$5,000 - American Civil Liberties Union
$5,000 - Access Now
$2,500 - The Calyx Institute
$2,500 - Center for Democracy & Technology
$1,000 - Restore the Fourth
$1,000 - Patient Privacy Rights
$1,000 - Online Trust Alliance
$1,000 - Tech Freedom
$1,000 - Demand Progress
I love their vision, the services they provide, and what they've done here (with the donations) <3
As a bonus, you will foist some competition on the market as companies which thought they were on an acquisition path will now be forced to show real profits.
Seems like a great company and search engine to me.
When you look at your database as a dumb datastore, you're really selling short all of the capabilities that are in your database. PG is basically a stack in a box.
When I started getting into Elixir and Phoenix and realized that the entire Elixir/Erlang stack was also basically a full stack on its own... and that by default Phoenix wants to use PG as its database... I may have gone a little overboard with excitement.
If you build with Elixir and PostgreSQL you've addressed almost every need that most projects can have with minimal complexity.
It wasn't a "we do this at scale" talk, but I'd love to see more experiments like it.
For the impatient: Skip to 17 minutes into the video, where he describes the previous architecture and what parts are replaced with Postgres.
Listen/Notify work great for short-term job queues. For longer term ones, you have some serious difficulties on PostgreSQL which require care and attention to detail to solve. In those cases, of course, you can solve them, but they take people who know what they are doing.
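The short-term pattern described above can be sketched as follows. This is a minimal, hedged example, not a production queue: the `jobs` table (id, payload, status) and the `jobs_channel` name are hypothetical, `FOR UPDATE SKIP LOCKED` (Postgres 9.5+) lets several workers dequeue without blocking each other, and psycopg2 is just one common driver choice:

```python
# Minimal sketch of a short-term Postgres job queue, assuming a hypothetical
# "jobs" table (id, payload, status). FOR UPDATE SKIP LOCKED lets several
# workers claim rows concurrently without blocking each other.

DEQUEUE_SQL = """
UPDATE jobs
   SET status = 'running'
 WHERE id = (
     SELECT id FROM jobs
      WHERE status = 'queued'
      ORDER BY id
      LIMIT 1
      FOR UPDATE SKIP LOCKED  -- skip rows another worker already claimed
 )
RETURNING id, payload;
"""

def run_worker(dsn, handle_job):
    """Wait on LISTEN for wakeups, then drain the queue.

    psycopg2 is imported lazily so this module loads without it installed.
    """
    import select
    import psycopg2

    conn = psycopg2.connect(dsn)
    conn.autocommit = True
    cur = conn.cursor()
    cur.execute("LISTEN jobs_channel;")
    while True:
        select.select([conn], [], [], 5)  # wake on NOTIFY, or poll after 5 s
        conn.poll()
        del conn.notifies[:]              # consume pending notifications
        while True:
            cur.execute(DEQUEUE_SQL)
            row = cur.fetchone()
            if row is None:
                break                     # queue drained; go back to waiting
            handle_job(*row)              # (job_id, payload)
```

Producers would INSERT a row and issue `NOTIFY jobs_channel;` in the same transaction. The long-term caveats above still apply: long-lived queue tables accumulate dead rows and need vacuum and bloat attention.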
Also in terms of storing images in the database, this is something that really depends on what you are doing, what your database load is, and what your memory constraints are. At least when working with Perl on the middleware, decoding and presenting the image takes several times the RAM that loading it off the filesystem does. That may not be the end of the world, but it is something to think about.
Also TOAST overhead in retrieved columns doesn't show up in EXPLAIN ANALYZE because the items never get untoasted. Again by no means a deal breaker, but something to think about.
In general, PostgreSQL can be good enough, but having people who know it inside and out is important as you scale. That's probably true with any technology, however.
Out of curiosity, does anyone have a favorite article saved that does a great comparison of when to use certain databases?
For SQL, complex queries, and data warehousing: yes. It's an excellent database and I'm not sure why you'd pick another SQL DB unless it were a lot better on point two.
For high availability and scaling: no, absolutely not.
The problem with the latter is an arcane deployment process and arcane error messages that provide constant worry that you're doing something wrong. It's a many-week engineering project to deploy HA Postgres, while HA RethinkDB takes hours -- followed by some testing for prudence... our testing revealed that it does "just work", at least at our scale. We were overjoyed.
The docs for Postgres HA and clustering are also horrible. There are like five different ways to do it and they're all in an unknown state of completion or readiness.
Of course if/when we do want complex queries and more compact storage, we will probably offload data from the RethinkDB cluster to... drum roll... a PostgreSQL database. Of course that will probably be for analytics done "offline" in the sense that if the DB goes down for a bit we are fine. HA is not needed there.
TL;DR: everything has its limitations.
The closest thing(1) was dBase/FoxPro. You could actually build a full app with it. Send email from the database? Yes. Is that wrong? It's only "wrong" because RDBMSs(2) made it wrong, not because it actually is. Why is it better to split an app across separate languages/runtimes/models than to have one integrated environment?
(1): Taking into consideration that neither FoxPro nor any "modern" RDBMS has taken the relational model to its full extension.
(2): An RDBMS is a full package with a defined role and limited capabilities. A relational-like language will not be an exact replica of that; it isn't even required to implement a full storage solution.
The biggest mistake the relational guys have committed is to always think in terms of full databases instead of micro-databases. Ironically, kdb+ (or Lisp? or REBOL?) could be the closest thing to the idea (where data and code are not enemies but friends).
For example, for our internal analytics/logs/metrics we use ELK and Druid, but believe it or not, these tools, despite their purported scaling abilities, are actually damn expensive. This new cloud "elastic" stuff cheats and uses lots and lots of memory. For a bootstrapped, solvent, self-funded startup like ours, we do care about memory usage.
For customer analytics we use... yes Postgresql.
For counters and stream-like things we don't use Redis; we use PipelineDB (a PostgreSQL fork). For Cassandra-like stuff we use Citus (a PostgreSQL extension).
Some of our external search uses SOLR (for small fields) but Postgresql text search is used for big fields.
The only part of our platform where we don't really leverage PostgreSQL is the message queue, and that's because RabbitMQ has done a damn good job so far (that, and the damn JDBC driver isn't asynchronous, so LISTEN/NOTIFY isn't really useful).
I've read all sorts of hacks but I would love for someone to solve this for me!
I have some slightly tangential questions, which I'd love to hear people's thoughts on: How do you decide where to draw the line between what's kept and defined in the application versus the database? For example, how strict would you make your type definitions and constraints? Do you just accept that you'll end up duplicating some of it in both places? Also, how do you track and manage changes when you have to deal with multiple environments?
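On the duplication question, one common compromise can be sketched like this. It's a minimal illustration with a hypothetical `users` table, not a prescription: keep the database as the last line of defense via a CHECK constraint, and mirror the same rule in the application so users get a friendly error before a round-trip:

```python
# Hypothetical schema: the CHECK constraint is the authoritative rule.
CREATE_USERS_SQL = """
CREATE TABLE users (
    id    serial PRIMARY KEY,
    email text NOT NULL,
    age   integer NOT NULL,
    CONSTRAINT age_is_sane CHECK (age BETWEEN 0 AND 150)
);
"""

def validate_age(age: int) -> None:
    # Must stay in sync with the age_is_sane CHECK above; if the two rules
    # drift, the DB still wins, but the app shows a less helpful error.
    if not 0 <= age <= 150:
        raise ValueError("age must be between 0 and 150")
```

The duplication is accepted deliberately: the app copy exists for UX, the DB copy for integrity, and a schema-migration tool is what keeps both versioned across environments.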
I was reading the docs; it looks like for every client connection Postgres forks a new process and uses a shared memory model.
Using threads or coroutines instead might be useful for scaling it further.
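Since each connection costs a forked backend process, the usual mitigation today is connection pooling (pgbouncer server-side, or a client-side pool) rather than changing Postgres's process model. Here is a toy, driver-agnostic sketch of the client-side idea; `factory` stands in for whatever creates a real connection:

```python
import queue

class ConnectionPool:
    """Toy connection pool: because Postgres forks one backend per
    connection, reusing a few long-lived connections (as pgbouncer or
    client-side pools do) is the usual way to serve many requests.

    `factory` is any zero-argument callable returning a connection-like
    object; queue.Queue makes acquire/release thread-safe.
    """
    def __init__(self, factory, size=5):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self, timeout=None):
        # Blocks until a connection is free (bounding backend count).
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)
```

The key property is the bound: with a pool of 5, the server never sees more than 5 backends no matter how many application threads there are.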
By an interesting coincidence I was given a CD with tons of software on it, including Delphi 2. I started playing with it and quickly realised I could create my own programs using that tool.
Later I went to the book store and found a book about it that I couldn't afford. So I came back every week to read as much as I could, then headed back home to try it on my illegal copy of Delphi. (I was 11.)
Then I realised there was a help manual embedded in the distribution, so I learned as much English as I could so I could understand that manual.
Little by little I learned about conditions, loops, object programming and made a bunch of terribly crappy yet working games.
No amount of studies could have taught me as much as I learned through using that software. No manual work could have given me the fun I had when programming my very own games. Reading "Delphi" as a headline on hacker news made me feel nostalgic and I figured I'd share my story here.
So thanks Borland, the 11 year old me may not have paid to use that software, but it was enough of a revelation to make me the programmer I am today.
Ironically, I don't use Delphi anymore. Yet I still help people with it and defend it whenever possible.
Delphi is amazing. It has only one HUGE problem: its owners.
You can read why Delphi faded away here: https://www.quora.com/Why-did-Borland-fail?share=1
"Borland lost its way when executive management decided to change the company to pursue a different market. ......In the height of the enterprise transformation, I asked Del Yocam, one of many interim CEOs after Kahn, "Are you saying you want to trade a million loyal $100 customers for a hundred $1 million customers?" Yocam replied without hesitation "Absolutely."
I have started using it in earnest and it is wonderful - and a company can never take it away from you; there's no "we're shifting focus..." announcements. No paid support for broken updates.
I honestly can't recommend it enough.
At work, I inherited maintenance of an in-house application that a coworker who left the company had written for our accounting department. In Delphi.
When people speak highly of Delphi, they always mention how great the IDE is. Personally, I am not a big fan of IDEs, so I was prepared to be disappointed; mainly, what I do is read and edit the source code.
And I have to admit, that part was a pleasant surprise! I had never looked at or touched Pascal code before, but I was able to make a change to the source - and it worked, the very first time! - within a few days. And the majority of that time was spent figuring out how the code was structured and what the accountants wanted me to do (they always talk to me as if I knew the first thing about accounting).
But after that, it was smooth sailing. It sure helped that my predecessor wrote very readable code, but it seems that ObjectPascal made it very easy to write it that way.
Or, more briefly: Happy Birthday!
The application is about 10 KLOC in total, which is not small in my book, but not "very large", either. Also, I was a developer, a sysadmin, and a helpdesk monkey all at once, so the phone was ringing about every fifteen minutes.
For someone that uses it for home projects, $916 for an upgrade is out of the question.
Rambling here, but... Delphi 3/5/7 were incredible design packages. I feel they lost focus when they jumped on the .NET bandwagon - maybe it is just my perception, but maybe some of their internal developers were less focused on native code. Anders leaving was also a big hit. Kylix was yet another distraction.
The choice was between learning Delphi 8 for .NET or learning Visual Studio .NET. I chose MS .NET because MS is the custodian of the .NET Framework. They are the ones in the driving seat, not Borland.
Looking back, I don't regret the move because Delphi is dying. Posts requiring Delphi are nowhere to be seen on job sites in South Africa.
Hello World ran to about 100 kB.
Funny thing is, I am just getting into Go, and you know what?
They still need, once every 6 months, to compile the EXE file with a small change.
I get paid monthly just to press CTRL+F9 (generate new .EXE file) every 6 months, basically.
I liiiike it :)
It's sad that RAD isn't popular anymore. 1995-1999 was great with the Win95 era; everything was so consistent, with good documentation. Then Microsoft realised their "The Microsoft Network" had lost against the open, free WWW, and they announced .NET (which took until 2002 for them to release v1.0) - that was the beginning of the end of the great Win32 platform and RAD. HTML with FrontPage and Dreamweaver was just an okay RAD, and the situation got worse with the "no tables, use div" and XHTML 1/2 movements.
I know of a few companies in Germany and Netherlands that still use it, but it is hard to get offers.
And we had to wait 15 years until .NET started to offer a compilation model similar to Delphi (only for UWP apps).
While Java kind of outsourced it to third-party JDK vendors due to Sun's attitude against AOT compilation - oh well, at least Java 9 will bring the first steps towards supporting it.
Fast forward 20 years, and I'm a hardware developer using Altium a lot. As some here may be aware, Altium is a multi-gigabyte CAD/EDA behemoth written - and still maintained - in Delphi. 3D, DirectX, everything in Delphi.
Just yesterday and today I had easily reproducible BSODs while using a very basic feature (routing nets). Memory leaks galore - most people I know have the habit of shutting Altium down now and then just to avoid a crash. It crashes a lot, and I'm getting tired of unhandled-exception windows.
And this way, I realize even great development tools age badly. Of course this is not all Delphi's fault, but it shows how tech is a Red Queen's race: you must run faster and faster to keep in the same place.
Better: the Delphi component libraries made writing a GUI application easy. I've never seen anything of their quality and ease up through today, and I still miss being able to get an application running by just subclassing some standard components and writing the core logic I needed.
Such a shame it went downhill after the .NET stuff.
Edit: this is a historical reference... search for "Oracle Delphi Python"
Fast forward years later, and after plenty of intermediate languages (C, C++, Java, ...), I got hired into my first professional software developer job ... as a Delphi developer in a company with a big multi-million-line Delphi codebase they wanted to migrate to the web. I had done some web development, so I ended up writing features on both the Delphi and web teams, often the same feature. So I became intimately familiar with the trade-offs of Delphi vs web development.
The thing about Delphi: it was/is insanely productive. In the beginning it took about 3 to 5 times as long to write a comparable feature on the web side. It ended up driving me to research cutting-edge web dev techniques to find some way to approach the productivity levels that Delphi gave. In the end we almost got there, using rich frameworks and a component-driven UI. But to this day the Delphi team can still get a feature done faster than the web team, and that's despite an IDE which is much weaker than WebStorm/IntelliJ. Object Pascal and the VCL are just that good.
However, I wouldn't do a new project in Delphi. You're locked into an ever more expensive product with an uncertain future, and the productivity advantages just aren't worth it anymore. You can get close enough using an open source dev stack, and the value of having all your tools be open and free is significant.
It _is_ a great IDE/language.
Delphi 1.02 I got from some CD in a UK magazine sold in the USA. They gave away free copies of commercial software for buying the magazine for $10 or so.
I did better in Visual Basic because I got jobs that required it.
There are a lot of legacy applications written in Delphi here. Some old programmers, who only know how to program in Delphi, may even build new apps with it.
It may not be that modern today, but there's the "pay the bills" mindset.
I think Delphi will die, but only because it costs a fortune today. If it had a reasonable price, many people would continue to use it indefinitely.
Delphi vs VB. TASM vs MASM. Borland C++/OWL vs Microsoft C++/MFC. Philippe Kahn vs Gates.
I eventually had to move to VB when my Delphi job dried up. VB felt like a downgrade. I remember constantly referring to Dan Appleman's book so I could use Win32 APIs to work around the limitations of VB. I'm not knocking VB as it's one of my favorite tools too, but Delphi was the cream of the crop.
Maybe it really is great, I don't know. I was a bit surprised looking at Tiobe (http://www.tiobe.com/tiobe-index/) that they rank it as the 9th most popular language right now. Where is this actually being used? And the price tag is incredible!
Delphi why won't it die? (2013) (stevepeacocke.blogspot.com)
Did a search in hn.algolia.com just now to find the thread, and it was the top result when the setting was "By popularity":
But the real issue is rapid database applications. Being able to create business apps very rapidly was always Delphi's thing. It was a really fantastic replacement for Paradox and brought Turbo Pascal and Paradox together to create something amazing.
For that there still isn't anything. There are a lot of things that are close, but nothing with the same level of power and speed. The closest thing I've found is Django. Hopefully that clarifies: it's not about GUI anything.
The latest answer has been annual licenses.
FWIIW, where I work, I started using Delphi 5 when it came out. As of now we have about a dozen D5 desktop applications all essential for the core business that are in active use and supported as necessary, running happily on Windows 10. I don't see why it won't continue like this for another 15 years... The bosses (who are non-programmers) don't really care that D5 is out of fashion - because it all just works (and rocks).
The only reason I couldn't upgrade to a later version (Unicode is one thing that would be useful) is I am stuck with one critical and long discontinued grid component.
Looking back I am not sure how we could have lived w/o Delphi. Well maybe it's an exaggeration, but certainly life would have been more difficult. Everything else from before .NET/C#/Windows Forms looks like a nightmare.
Has there ever stopped being a need for this?
For me it was less rosy.
I remember my boss trying to get me started on Delphi.
We basically had two or three Delphi developer workstations available because setting up one that could compile the projects we had in our vcs was a three day task - and it was only possible with the help of our resident Delphi consultant.
Stuff like that has made me love Maven and Java.
Same goes for Visual Basic that I once used to love.
Still I feel I could have loved it if I didn't start with 10 years of accumulated references to unsupported packages. :-/
Not convinced it's still best as a practical language though.
The book with the IDE was free and easy enough to comprehend and it introduced all the basics of OOP.
Great times. I wonder if lots of people have had this same experience with VB .NET
Are recent versions of Delphi actually usable? I haven't tried anything beyond 8, and from my (admittedly rusty) experience the best version was 6.
To taste the cross-platform IDE for Rapid Application Development today:
"Why use Lazarus?
With Lazarus you can create programs which do not require any platform dependencies. The result is that the user of your program does not need to install any further packages, libraries or frameworks to run your software.
Linux/BSD applications may depend on GTK2 or alternatively Qt. Some add-on packages may also add dependencies of their own.
Can be used in commercial projects
Some IDEs restrict their license to only non-commercial development. Lazarus is GPL/LGPL, which permits using it in building commercial projects.
LGPL with additional permission to link libraries into your binaries. Some additional packages come with various licenses such as GPL, MPL, ..."
What do you get 22 years later on an Intel i7?
Would someone please point to the blog post claiming that Tcl/Tk is the universal scripting language?
> Without this, I usually get bored. More importantly, companies that don't have this usually have a hard time recruiting enough great people to work with them, and thus struggle to become very large.
I find this one interesting. When Google started, there were many incumbent search engines. Their mission was to provide better results than what existed, i.e. improve on an existing product.
Uber's "mission" was to get privileged people with smart phones and disposable income a sexy/cool ride home from the bar.
Facebook was a social networking site targeted at college students... what was its mission? To allow people to see what that cute person in their Psych class was up to this weekend?
Snapchat... the clear and important mission to allow for safely sending risqué pictures?
While I believe Sam's claim that they WANT to work with companies that have clear, important missions, that doesn't seem to be the deciding factor in becoming "very large", or in attracting great people. If anything, it seems like the "spaghetti at the wall" approach is what happens - some startups gain momentum, and then it's the high-growth trajectory that attracts a ton of talent.
Sounds interesting. I always kinda hated how people fresh out of high school with zero credit can have so much money handed to them for going to school. While if they wanted to get a business loan or buy a house they would be much more scrutinized.
I wonder if, with that money instead and the right mentorship, they could come out ahead of their peers who used the money for college.
I know some people who just go to college because they feel like they have to, and get degrees they don't even use. Imagine one person spending 4 years going into debt, and another coming out ahead with a real-world, profitable startup. One thing I always hated about school work is that I felt like I was just doing stuff for the sake of being graded or being busy. I'd rather spend my time creating stuff that's more real, and learn as I go. There's a bunch of free and even paid on-demand training on the internet to fill in your skill gaps.
I think the "startup community" needs to have a serious conversation about this. If we agree that machine learning is going to be a major underlying technology moving forward, and that as a result data is the new oil, then the top 5 technology companies serve as the new oil barons.
It goes further than that actually in my mind, as these firms are disrupting themselves and innovating as fast if not faster in many areas than small startups can. They are in every industry and if they can't move as fast, then they buy the talent or the whole team. Granted, a small team will always move faster, but I think that these mega corps know enough now to know that they can pivot on trends if needed.
I do worry about the likelihood of a startup that could beat google, facebook, microsoft etc... with humble roots. Maybe if a "startup" came in with 100M+ in funding, but more than likely at this point those companies would be funding that startup, if only for the intelligence value.
So while "we've seen this before - someone always comes out of nowhere" which gave rise to the current crop of major companies, it would have to be pretty left field at this point for a company to compete with the major players without investment from an arm of theirs. Considering Snapchat is the biggest IPO in the coming year, I don't have a lot of hope that there will be something coming through.
First time I have seen those numbers, very impressive. I thought there were a lot less users.
Which innovations over the last 2 years most indicate a new age of artificial intelligence?
I undertook some research, and noticed:
3 months ago: Google Translate AI had a breakthrough
5 months ago: Amazon/Facebook/Google/IBM/Microsoft launch an AI partnership
1 year ago: AlphaGo has a big win and OpenAI is founded
2 years ago: Google (now Waymo) performs its first public driverless ride
3 years ago: Google acquires DeepMind and interest in deep learning takes off
6 years ago: IBM Watson wins Jeopardy
Climate change may be the defining issue of the next century, and it would be a shame to be stymied by the communications challenges. I hope you'll consider it.
> funded over ... 1,470 companies
> more than 50 of our companies are worth more than $100 million each.
can you give a broad overview of what software YC creates? What does it do? is it internal tracking and stats, or exclusive libraries that YC-funded companies get to use to build their products?
* "Bay Area: Tech job growth has rapidly decelerated" (from here: http://www.siliconvalley.com/2017/02/10/tech-job-growth-slow...)
* "Tech Jobs Took a Big Hit Last Year" (from here: http://fortune.com/2017/02/16/tech-jobs-layoffs-microsoft-in...)
The first of those two articles has been submitted twice to HN with zero comments both times. The second has not been submitted at all as far as I can see.
Is it possible there is a bit of a collective head-in-the-sand phenomenon going on?
Can you please elaborate on that?
> we have to work harder to find people doing a startup for the right reason: to bring an idea they're obsessed with to life, and willing to do something unreasonable to see it happen
Here's a question for you (or other YC folks) on YC recommendations.
Given that I'm planning to apply for the next cycle, would it benefit me to ask for recommendations from 7 or 8 folks who I've worked with in the past and have strong relationships with?
It's not totally clear to me whether it would be kosher to ask for those or if it's something you'd rather that people do spontaneously. I'm also wondering if 7 or 8 is too many given how little time I know you have to spend on each applicant. Thanks!
It's also the new platform for lots of important online communities to discuss and disseminate information.
Oh and a remark on the letter's typography: the double spaces after periods should be single spaces.
> We are only about 30 years into the age of software, about 20 years into the age of the internet, and about 2 years into the age of artificial intelligence.
Software has been around--and actively making bank!--for over 40 years. The internet has been around longer in the form of academic institutions, email, and BBS than you mention.
And AI has, of course, been around at least as long as software.
I appreciate you have a message to send, but maybe don't ignore the path we took to get here.
I'm guessing what he means to say here is that if x% of the applicant pool is women/black/latino, then x% of the accepted pool is women/black/latino, right? The way it's phrased, it sounds like every woman/black/latino who applies is accepted.
Can you expand on the "obvious" and "destructive" force? What exactly did Thiel do that is obviously destructive to our whole society? You mean Palantir? Wasn't that there for a while?
This allows you to use a custom firmware developed for the Intel 5300 wireless adapter and read the CSI values with each packet.
Every 802.11n implementation that I am aware of keeps a CSI vector (IQ values, typically as integers) within the wifi chip. Both the WiFi AP and STA do this. The CSI vector is updated with every packet, using the training data at the beginning of the packet. (802.11 is CSMA, so there is a fixed transmission to start the packet.)
In other words, Intel has this nice tool for one of their (now somewhat dated) chips. But CSI is not restricted to Intel chips. Atheros chips have a decent but limited CSI readback method, not quite as nice as Intel's. But CSI has been used for experiments on all major wifi chips out there.
With 802.11n this is used to determine the quality of signal likely to be received on each sub-carrier within the signal.
CSI is useful for many other things: RF experiments, indoor position sensing, and now apparently also password cracking.
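To make the CSI idea above concrete: each subcarrier's channel estimate is a complex (I, Q) value, and sensing work typically keys on per-subcarrier amplitude (and phase) over time. Here is a minimal sketch with hypothetical integer IQ data, loosely the shape that tools like the Intel 5300 CSI tool report per packet:

```python
import math

def csi_amplitudes_db(iq_pairs):
    """Convert a CSI vector of integer (I, Q) pairs, one per subcarrier,
    into per-subcarrier signal power in dB.

    The input here is hypothetical example data; real CSI tools emit
    similar integer IQ matrices for every received 802.11n packet.
    Power in dB is 10 * log10(I^2 + Q^2).
    """
    return [10 * math.log10(i * i + q * q) for i, q in iq_pairs]

# Example: two subcarriers with made-up IQ values.
amps = csi_amplitudes_db([(3, 4), (10, 0)])
```

Tracking how these per-subcarrier amplitudes shift packet-to-packet is the raw signal behind the positioning, breathing, and keystroke-inference experiments mentioned here.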
I've been a part of a similar paper that detected exact keystrokes. This one seems to build on a similar idea. The thing to keep in mind is that these systems need user- and environment-specific training. That is, if the user changes, or the user or something in the environment moves, the system needs to retrain.
Of particular interest: It can determine breathing patterns and heart rate.
With the Samsung phone, which has a much lower 1-digit recovery rate, it seems that it would be closer to 6% on the first try, and 20% by the twentieth try.
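As a sanity check on cumulative figures like these: if an attack narrows a PIN to N roughly equally likely candidates and the attacker guesses without repeating, the chance of success within k tries is simply k/N. A tiny sketch (the candidate counts below are hypothetical, not taken from the paper):

```python
from fractions import Fraction

def p_success_within(n_candidates, k_tries):
    """Probability of hitting the right PIN within k distinct guesses,
    assuming the attack narrowed it to n equally likely candidates.

    Drawing without replacement, each try eliminates one candidate,
    so the cumulative probability is k / n, capped at 1.
    """
    return min(Fraction(k_tries, n_candidates), Fraction(1))
```

Under this toy uniform model, 20% by the twentieth try corresponds to roughly 100 remaining candidates; that the first-try rate is quoted as 6% rather than 1% suggests the candidates aren't uniform, with a few ranked far more likely than the rest.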
ETA: This was meant to be glib, given the frequency of such stories seen on HN, and the many children below are quite correctly pointing out that the real moral is https://news.ycombinator.com/item?id=13645694
For a while now, the "obesity paradox" has been a thing, where segments of the population who are a little heavier than one would expect actually have the best all-cause mortality rates.
Recently there's been some pushback on this "paradox", but the one I want to call attention to is here:
The problem with the obesity paradox is that it has been based on the flawed BMI. In this study, they actually did DEXA scans of elderly women's body fat percentage, and those with the highest BMI and lowest body fat percentage had the best all-cause mortality rates.
This suggests plenty of calories plus strength training is in the running for a longevity lifestyle. This makes sense intuitively and if one is familiar with the panoply of beneficial physiological effects from exercise on the human body. And, it would not come as a shock to find that increased strength from greater muscle improves, e.g., balance and coordination to prevent accidents in the elderly, nor that increased muscle mass provides a protective tissue reserve for fighting disease without the concomitant downsides of adipose.
Ancient philosophy also speaks of fasting two days a week. The autophagy process provides a scientific foundation for those claims.
> Among the conclusions from the study was the confirmation that prolonged semi-starvation produces significant increases in depression, hysteria and hypochondriasis as measured using the Minnesota Multiphasic Personality Inventory. Indeed, most of the subjects experienced periods of severe emotional distress and depression. There were extreme reactions to the psychological effects during the experiment, including self-mutilation (one subject amputated three fingers of his hand with an axe, though the subject was unsure if he had done so intentionally or accidentally). Participants exhibited a preoccupation with food, both during the starvation period and the rehabilitation phase. Sexual interest was drastically reduced, and the volunteers showed signs of social withdrawal and isolation. The participants reported a decline in concentration, comprehension and judgment capabilities, although the standardized tests administered showed no actual signs of diminished capacity. This ought not, however, to be taken as an indication that capacity to work, study and learn will not be affected by starvation or intensive dieting. There were marked declines in physiological processes indicative of decreases in each subject's basal metabolic rate (the energy required by the body in a state of rest), reflected in reduced body temperature, respiration and heart rate.
Calorie restriction OTOH is something that gives mixed results, and I suspect it may be related to the diet adopted and how it is implemented (ie, LCHF vs LFHC, fasting vs frequent small meals).
I suggest anyone interested in food/health to spend some time on google scholar searching about these approaches, it's enlightening.
Everyone is focusing on longevity when quality is probably more important. 120 years as a couch potato would be shit compared to 60 years of being active every day (whether physically or mentally).
People are against TRT because it shortens lifespan, while completely ignoring the quality of life improvements, which is just stupid.
Give me something to burn fast and bright, not slow and dim, thank you very much.
If we very boldly assume these relationships are causal and predictive to individuals, it follows lower energy consumption -> longer life. Caloric restriction might be one way to do this.
However, with lower metabolism, stuff in the body tends to get older. This makes it fail more easily; then you are likely to get sick, which will shorten your life. Therefore the optimal strategy to maximize life expectancy seems to be to control metabolism (by exercise, calorie intake, etc.) to the point where you are just unlikely to get sick, then stop.
They take a very Hallmarks of Aging view of the causes of aging, which I can quibble with around the edges, but it is good to see more people putting their money into the game of building ways to treat the causes of aging.
The principals at Apollo became involved in this space and raised a fund both because they are enthused by the field of therapeutics to treat aging and want to see it succeed, and because they recognize the tremendous potential for profit here. The size of the market for enhancement biotechnologies such as rejuvenation treatments is half the human race: every adult individual.
Publishing a magazine on aging research is a way to help broaden their reach within the community, find more prospective investments, talk up their positions, and raise the profile of the field as a whole, all of which aligns fairly well with the broader goals of advocacy for longevity science. Many hands make light work, and we could certainly use more help to speed up the growth of this field of research and development.
I'd be a bigger advocate of intermittent fasting. This has also been shown to improve health and longevity.
What studies (HMS, Johns Hopkins) have shown is that exercise (blood flow is vital to removing waste and, in turn, inflammation) and sleep (recovery) are equally as important as diet, if not more so.
These focus points are the same in fighting age-related diseases: Alzheimer's and sleeplessness.
'Therapeutic Fasting - Solving the Two-Compartment Problem' https://www.youtube.com/watch?v=tIuj-oMN-Fk
First, an anecdote:
1.) One year I decided to run a marathon, and joined a running group including some elite runners.
One was a former state champion. In that particular year he'd run an Iron Man, a marathon, and the Goofy Challenge. That is a big deal.
He was carved out of stone. At running clubs, women would surround him. They'd paw at him and giggle as they pointed at and touched this man's body of steel.
After witnessing this for months, once at a pub I saw him order, IIRC: 2 hotdogs, a hamburger meal (with its own sides), and sweet potato fries, which he washed down with several beers.
I was shocked. I asked if he always ate that way. He said he just eats what he feels like eating, with one exception: added sugar.
So that's what I do. I eat what I feel like, with three exceptions: small portions, no added sugar, and limits on artificial flavours. I've seen great results in energy, mind and physique. It appears my body is actually really well tuned to tell me what it needs without me having to do anything at all--as long as I follow those three rules.
2.) We need to be careful what we call actual science. If you can't rigidly follow the scientific method and test your theory to see if it matches with nature, it's not science.
3.) Nutrition is ridiculously important to our well-being. It's more important than sex, shelter, and ego. Our bodies have been evolving for millions of years to learn how to tell us what they require. It's arrogant to declare that you know better here without hard scientific evidence. That's what we're seeing now in the "scientific" literature. Butter is no longer a heart risk. Eggs are now healthy. All of a sudden sugar is the devil. Etc.
Health = Nutrients/Calories!
Discussion on HN:
He existed and even thrived on a diet of subrancid cheese and milk in every form, coarse and hard bread and small drink, generally sour whey, as William Harvey wrote. "On this sorry fare, but living in his home, free from care, did this poor man attain to such length of days".
Thomas Howard brought him to London to meet King Charles I.
Parr was treated as a spectacle in London, but the change in food and environment apparently led to his death.
Caloric restriction improves health and survival of rhesus monkeys https://dx.doi.org/10.1038/ncomms14063
Will calorie restriction work in humans? http://dx.doi.org/10.18632/aging.100581
As caloric restriction is increasingly shown to be the only efficient method of increasing longevity, I suppose more and more people will try it, and soon enough we'll start breaking records. Time will tell.
>much of it from plant-based material like the Japanese sweet potato, their staple food, in contrast with the rice-heavy cuisine of the mainland.
> But anyone testing the beta who links their iPhone to iCloud and wants the same level of privacy Signal has always offered should consider an extra step, too: Disabling a setting that uploads a call's metadata to Apple. The beta upgrade to Signal will use CallKit, Apple's framework for allowing VoIP calls like Signal's, to be integrated more completely into the calling functionality of the phone. But that also means calls will be recorded in the iPhone's call log and, for iCloud users, shared with Apple's server. "iOS treats CallKit calls like any other call, however that also means some information will be synced to iCloud if enabled," Open Whisper Systems warns. "This information includes who you called and how long you talked."
I have a technical question:
> We immediately realized that protocols like SIP, which traditionally required holding open long-lived connections in order to receive incoming calls, were not going to be compatible with the mobile environment.
Ok, so far so good.
> Instead we built our own simple REST-based signaling protocol [...], and used push notifications instead of long-lived connections to notify the client of incoming calls.
So, no long lived connection but a "simple REST-based signaling protocol". How is that supposed to work without a long lived connection?
> Actual push notifications hadn't been invented yet, though, so we created our own push infrastructure by sending encoded SMS messages that the app would silently intercept and interpret instead.
OK, that's pretty clear again.
> Over time, we switched to push notifications when they were created by Google and Apple [...]
But don't push notifications basically work over a long-lived connection? Of course it's better to have just one long-lived connection to Apple instead of one per communication app, but in the end, if you want real-time signalling in a mobile environment, you won't get around a long-lived connection, will you? At least that's my understanding, but I'm always happy to learn something new.
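That intuition can be made concrete with a small sketch. The point of the architecture described in the post is not that no long-lived connection exists, but that the one long-lived connection belongs to the OS and is shared by every app; each app only does short-lived REST requests when it is woken up. All names below are hypothetical stand-ins, not Signal's actual API:

```python
# Minimal sketch of push-triggered REST signaling (all names hypothetical).
# The OS keeps ONE long-lived connection to the push service; apps hold no
# sockets open themselves. A push payload is just a wake-up hint; the app
# then fetches the actual call offer with a short-lived REST request.

PUSH_REGISTRY = {}   # device_token -> app callback (stands in for APNs/FCM)
CALL_OFFERS = {}     # call_id -> offer details (stands in for the REST server)

def register_for_push(device_token, on_push):
    """App registers a callback; the OS-level channel delivers to it."""
    PUSH_REGISTRY[device_token] = on_push

def push(device_token, payload):
    """Push service wakes the app over the shared long-lived channel."""
    PUSH_REGISTRY[device_token](payload)

def rest_get_offer(call_id):
    """Short-lived 'REST' request: fetch call details, then disconnect."""
    return CALL_OFFERS[call_id]

# --- simulate an incoming call ---
received = []

def app_on_push(payload):
    # The push carries only a hint; the details come via a REST fetch.
    offer = rest_get_offer(payload["call_id"])
    received.append(offer)

register_for_push("device-1", app_on_push)
CALL_OFFERS["c42"] = {"caller": "alice", "sdp": "..."}
push("device-1", {"call_id": "c42"})
```

So the answer is yes: one long-lived connection remains, but amortized across every app on the device by the OS, which is far cheaper on battery and radio than each app keeping its own SIP registration alive.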
On the other hand, Signal's SMS support is broken (datastore and MMS issues), they don't want to bring SMS support to their "desktop" app (which STILL needs you to install Chrome to work), and they still don't support the use of multiple devices. Instead they're wasting resources implementing video chat, which no one really asked for and which won't help adoption nearly as much as a secure drop-in replacement for an SMS client would. Even worse, enabling SMS support will prevent any other SMS apps that let you have conversations via the computer from working.
It seems like they're actively trying to shoot themselves in the foot.
The best thing Signal could do is make a proper desktop application (even Electron would do now, though Telegram's approach is significantly better UX-wise) and integrate SMS seamlessly into it, both on the phone and on the desktop. Video chatting is nice, but it's not where the most important requirement for cross-platform private communication lies.
Video was very smooth as well.
And it still frequently happens that I get the same text message from Android users six times. Where are the priorities of this project?
I'd be interested in doing video calls on a bigger screen, like an iPad, for example. Are there any plans to expand this? Or is there a technical limitation, like the phone number?
This config works for me with other VoIP apps. I tried earlier but was not able to get a call through...
Sometimes I'll get MMS messages from other people inside other chat windows where they don't belong.
It's also really annoying that I have to attach an item BEFORE I enter the message. Gets old after a while.
The app crashes quite frequently.
Using T-Mobile's Wi-Fi Calling feature makes MMS inconsistent, though at least it lets me use it.
Signal should be available as an app on all platforms, including Windows 10 Mobile, and SMS/MMS should carry over, not just Signal-to-Signal.
I know there's a few other bugs, but I can't think of them at this moment.
BTW, I use the LG G4: unrooted, but with a custom recovery and unlocked bootloader.
#1: the dam is managed in the winter at about 1 million acre-feet below the top, for flood control purposes. The spillway has been operating all winter long.
#2: The operator cannot decide to use the emergency spillway. Water just goes over it when it approaches the top of the dam.
#3: no water has gone over the dam. That would destroy it.
One piece that's missing from this analysis is upstream precipitation. You are sensing precip at Oroville itself, but water in the dam comes from everywhere upstream (3600 square miles according to ). As well as (potentially) snowmelt.
People doing forecasting of reservoir levels will be integrating distributed precipitation information, snowpack, and temperature, with a soil runoff and routing model to get predictions.
Here's a great article on climate change's effects on our water infrastructure in light of the Oroville Dam's situation: https://www.nytimes.com/2017/02/14/opinion/what-californias-...
An example - the bulk of California's water storage is in the natural reservoir of mountain snowpack. The large volume of precipitation this year, much of it as high-altitude rain falling on snow, has caused an extraordinary amount of snowmelt in a short period of time. The natural reservoir of snowpack is rapidly released into our manmade reservoirs and strains or exceeds their capacity. This cycle of alternating extremes, drought to wet, is expected to continue, and our water infrastructure's capabilities must be planned in light of that.
Dry Creek basin seems to have some points where there's just a few meters of hump above the emergency spillway level.
However, this seems a little off: "Based off of our estimate, for every inch of rainfall at the Oroville dam, 136,790.5 acre-feet will be added to the reservoir." The relationship probably isn't linear. Much less of the first inch of rain makes it to the reservoir vs. the 5th inch of rain.
Hopefully, the next iteration will project how many inches of rain it will take this weekend to top the spillway again.
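The quoted acre-feet-per-inch figure can be sanity-checked with simple arithmetic. Here's a sketch assuming a single watershed-wide runoff coefficient, which is the simplification the parent comment is objecting to; real runoff is nonlinear and depends on antecedent soil moisture, snowmelt, and routing:

```python
# Sanity check of the "136,790.5 acre-feet per inch of rain" estimate,
# assuming a simple runoff-coefficient model over the ~3600 sq mi basin.

ACRES_PER_SQ_MILE = 640
watershed_sq_miles = 3600                                  # upstream drainage area
watershed_acres = watershed_sq_miles * ACRES_PER_SQ_MILE   # 2,304,000 acres

def acre_feet_per_inch(runoff_coeff):
    """Acre-feet reaching the reservoir per inch of basin-wide rain."""
    return watershed_acres * (1 / 12) * runoff_coeff       # 1 inch = 1/12 ft

full_runoff = acre_feet_per_inch(1.0)   # every drop reaches the reservoir
implied = 136_790.5 / full_runoff       # coefficient implied by the article
print(round(full_runoff), round(implied, 2))  # 192000 0.71
```

So the article's figure implicitly assumes about 71% of rainfall reaches the reservoir, held constant across the storm. The parent's point is that this coefficient itself grows as the ground saturates, which is why the first inch contributes far less than the fifth.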
The money shot is at 2:25
Acre-feet. Of course.
The only thing missing here is nautical miles and football fields.
They reduced the mandatory evacuation order but more and more emergency personnel are arriving. There are (unfounded at the moment) local reports of water seeping underneath the emergency spillway weir.
Folsom dam has been spilling at insane rates the past few days. I wish people living in Sacramento county (my parents) would take things more seriously.
I always feel a sinking feeling in the pit of my stomach when I read things like this.
Is any labour of love worth your potential future? As much as I admire his dedication, it's not a choice I'd make... :(
Maybe it's just because I'm a bit of an ops guy, and it feels like leveraging a single point of failure. :P
>Although I can't afford to host this content (it weighs in at hundreds of gigabytes)
I really urge you to get in contact with Jason Scott about getting a copy of your archive to the Internet Archive. These scans and dumps could be priceless and it makes sense to have as many backups of it as you can.
At any rate, thank you very much for all your hard work, time and expense in doing what it is you do byuu.
(DHL's German notice, translated:) Due to the strikes at Lufthansa in December 2016, shipments to the USA had to be redirected via alternative transport routes, among them sea freight. This is currently still causing longer delays in the delivery of Economy packages to the USA.
If you have the DHL tracking number, you should try tracking it.
There's also the DHL Facebook page where you can start a Facebook chat; with luck your support agent will speak English. (Just try!) https://www.facebook.com/DHLPaket/app/1609168226005546/
From what it looks like, there's a high chance your package is on a ship somewhere. And even more importantly, the sender should create a "Nachforschungsauftrag" with DHL (ha, one of our bureaucratic German words I can't even find a translation for - it's a request for inquiry into a lost package). Even with domestic packages over here, the tracking information is sometimes plain wrong, and I expect it to be wrong even more often when they don't use their usual transport routes. (No tracking of all those packages in a shipping container on a freighter.)
Good luck. (Btw, I once gave up on a package from the US to Germany after 6 months. The day after I gave up and purchased an alternative product, my package arrived ... UGH.)
This makes me think of another idea to avoid shipping the cartridges themselves around: an interface that allows eventually accessing them over TCP/IP. It could be exposed as a block device or a custom protocol, it doesn't matter as long as you can essentially send an address and get some bytes back, allowing to read the entire address space.
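A trivial request/response framing could work for the "send an address, get bytes back" idea. The following is a sketch under assumed conventions (the message layout and function names are invented for illustration, not an existing protocol): the client sends a fixed-size header of address plus length, and the server replies with raw bytes from that region of the cartridge address space.

```python
import struct

# Sketch of a trivial read-only wire protocol for remote cartridge dumps
# (message layout and names are hypothetical). Request: 4-byte address +
# 2-byte length, big-endian. Response: raw bytes from that region of the
# address space. In practice this would run over a TCP socket; here the
# "cartridge" is an in-memory buffer so the framing can be shown end to end.

REQ = struct.Struct(">IH")  # address: u32, length: u16

def encode_read(address, length):
    """Client side: build a read request for `length` bytes at `address`."""
    return REQ.pack(address, length)

def serve_read(cartridge, request):
    """Server side: decode the request and return the requested bytes."""
    address, length = REQ.unpack(request)
    return cartridge[address:address + length]

# --- example: read 4 bytes at offset 2 of a fake cartridge image ---
cart = bytes(range(16))
reply = serve_read(cart, encode_read(2, 4))
print(reply.hex())  # 02030405
```

Exposing this as a block device would then just mean translating block reads into these requests; latency per request would be the limiting factor for dumping large carts, so batching reads into large lengths would matter.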
I did my best to contact them and ask for the package to be found, but nobody wanted to know. I eventually spoke to a woman in eBay's customer services, but nothing ever came of it. In my situation, at least I got my money back, and I know it's not quite the same as this poor guy's, but I can understand the frustration of dealing with tracked shipments. Especially for rare items that are difficult or impossible to replace - all you want is the package itself. Shipment tracking is supposed to prevent this happening - after all, if the package gets lost, it's likely in the same place as the last scan! You'd think the money the couriers have invested in the tracking system would incentivise them to use it to find lost packages.
Since he's handling the cartridges personally, it's a bit of a shame he has to give them all back. Wouldn't it be kinda neat to make an uber-cartridge that contains all chip configurations from every Super Nintendo cartridge in existence, so it could play any game on the hardware that game expects?
My story with them: http://webtrack.dhlglobalmail.com/?id=27838&trackingnumber=G...
That's more than one full month for a package that contained nothing more than a tee shirt. On top of that, the package never arrived at its final destination and might be stuck in customs, for all I know. I never got a notification, and will actually contact DHL myself soon, because their service is absolutely laughable.
As a logistics company, they had one job. And they failed.
They said in the future, if you want to be sure of a delivery, you have to send it registered mail: with that, every time a transfer of any kind occurs, it is done with paperwork, signatures, tracking, and under lock and key. Of course it costs an arm and a leg too.
Found this odd. What is scary about traveling to, presumably, a big city in one of the richest countries in the world?
That's some really high expectations there.
If you want to send something to another country and aren't willing to lose it? Buy a plane ticket.
It's essentially impossible for the carriers to prevent this from happening given their volumes.
They may be able to open a case for you.
End of story.