After removing the laundry, take your t-shirts and stretch them yourself, one by one, when they're still slightly wet. Grab them with two hands symmetrically, stretch horizontally, moving your hands down along the shirt. Do the same vertically, and with the sleeves.
Do not use a machine dryer, just a regular standing dryer like [1].
Put your t-shirts carefully, symmetrically on the dryer, and once dry, put them on a hanger. If you follow this, you will not have to iron them at all.
Source: been doing this for 4 years and I haven't touched the iron since. All my t-shirts are 100% cotton (though I buy only high-grammage ones) and they all seem brand new and ironed (the only exception being one particular brand whose collar looks bad unless ironed; I stopped buying that brand). YMMV of course.
[1] http://ecx.images-amazon.com/images/I/41oWjx2Q-mL._SY300_.jp...
Cotton is just a lousy fiber. On the other hand, wool is a strong and resilient fiber. It also never needs to be washed provided it isn't stained.
My wife knit a wool sweater for a close friend of mine who spent 6 months as a bosun on the tall ship Lady Washington (the Interceptor in Pirates of the Caribbean). Fresh water is scarce on a tall ship, so showers were infrequent. He came home during Christmas and I smelled the sweater, which he claims he never washed, and it smelled fresh. Surprisingly, it also kept him warm and dry on the open ocean. I later learned that Irish fishermen have been wearing wool sweaters at sea for generations.
Wool is the fiber of the past and future.
While most modern dryers offer a choice of temperatures, the big knob mostly controls a humidistat-based target. I personally equate the "very dry" setting with "shrink beyond usability".
I'd expect that removing clothes while still damp would be more important to avoiding shrinkage than reducing the heat, but I'm no T-shirt scientist. (T-shirtician? T-shirtologist?)
I wonder if there are any companies that sell inexpensive custom t-shirts? Provide your measurements, specify desired fit, neck type, color, and fabric, and order exactly what you want.
As someone very hard to fit for pants (28" waist and cyclist thighs), I would be thrilled if you also did this for jeans and shorts.
The "manufacturing variance" chart jumped out at me as looking fairly unnatural: there's a variation in width or variation in length but very little points that mix. Then I noticed that we're talking about just over half an inch in each direction.
How much of this effect is variation in your measurement?
The Uniqlo shirts are a cotton/polyester blend while the BR shirts are 100% cotton, which explains the durability boost the synthetic fibers provide.
It would be great if we could get some data on which brands have the most and least variance and which brands expand and shrink the most over their lifetime.
Edit: I'm curious about the downvotes. Are people appalled at my lack of taste in t-shirts? :)
Their charts expose this inconsistency. Some brands, like Mack Weldon, are more consistent than, say, American Apparel.
Even supposedly objective measures like pants waist size in inches get fudged: a nominal 32" is typically actually 34" -- I guess to make people think they are thinner than they actually are.
The size charts are a real eye-opener. I know that Abercrombie carries smalls that fit me fine, but Zara was an unknown to me. Apparently, their tees are also reasonably priced and look pretty good...
Thanks for the post!
Wash as usual and then hang on a plastic hanger until dry. So, don't use a dryer; just let them hang, starting when they are still wet from the washing.
This also works if you hand wash and rinse but don't squeeze out much of the water, that is, hang them while they are still wet enough to drip.
Also works with knit polo shirts.
What if the fabric were rotated 90 degrees during manufacture; wouldn't that eliminate the problem?
The shrink pattern is related to the orientation of the threads in the fabric, is it not?
After around a year or so of implementing questionable features, I attempted to get approval for updates to old, well-used features to improve them (stability- and convenience-focused, really), but was shot down. This wouldn't sell the software, because it worked well enough, and we needed new revenue more than we needed to retain old customers. At that point I understood that after the software is sold, the customer will be too ingrained in the product to leave without financial repercussions.
A while later, we got bought out by Big Company, so that strategy apparently worked. BC doesn't give half a shit about anything we ever did, and we piled on the features release after release with little concern about anything else. I tried a couple times after the buyout to get approved for existing product improvements, but always got shot down.
I continue to find it odd how the company can be so profit oriented, and yet so averse to improvements. I suppose I'm just wrong or don't actually understand. Either way, it makes it very hard to care about my work these days.
0) Abuse.
1) Executives cut projects: a lot. The budgets for games are so insane that executives need to constantly trim budgets and shift things around. It is common to walk over to an artist's desk and inform them the art they have worked on for 2 years won't be used. I am convinced telling a wife her husband has passed is the same feeling.
2) The budgets have exploded. My last project, for an iPhone game, was well over 4 million dollars.
3) Complexity is compounding. My last team (for a prototype) consisted of: AI guy, graphics/C++ guy(s), gameplay guy, Art TEAM (vector and raster) and project managers. The art pipelines alone will suck the budget dry.
4) Pay is low. Since you are starting fresh each project (see 5), your working knowledge of the system is similar to that of someone new. Promotions, salary increases, etc. don't make any financial sense (see 1) unless you are a rockstar. The new kids walking in usually burn out and quit because they don't understand the massive shit show the industry is. EA's managers just grind people until they can't walk. Disney is a sweatshop.
5) NOTHING is reused. After your second project, you quickly realize the AI you created for fish has nothing to do with your AI for a 3D shooter. The asset pipeline you created for a soccer game doesn't translate over to a racing game. Game companies are full of dead code repos. People try to create/use repeatable platforms, but then the game designer guy will walk by and say "Hey, is that the newest Unreal engine?". In games, anything reused is quickly spotted as reused. This is why games that have a good series going do really well financially. GTA is at what, like 15?
6) Success is low. A few years into a project, someone will say: "But it's not... fun". Welp, good luck fixing that. Or plan on having it rot in some terrible online store.
7) Rockstars. Executive: "OMG you wrote the AI for GTA2 in 1998??". Welp, this guy is now your boss. AND, because games are almost always a luck play - this "Rockstar" will teach you absolutely nothing.
My takeaway:
I have talked with guys in the game industry that have been in it 20+ years and asked WTF. Basically, lifers are like high school teachers. They are abused and underpaid: but they love what they do.
A good description of a lot of big corp projects. Do people working on large open source projects eventually feel the same way?
This is my experience, too. Without autonomy and ownership across a whole project it's very easy for people to get tunnel vision about what's valuable. This causes general harm to both the team and the outcome of its project.
I'm not sure how to lessen the effect other than perhaps by making projects small enough that they can be worked on by just a few people and using this phase to establish a kernel of good ideas and team cohesion.
Perhaps there might be another structure where the tools that are provided to the team are literally so good that the main project can be done by just a few people working on everything together. (Idealistic vision here.)
But the feeling of being a little cog in the machine aside, some of what is said here is about failures of management: communication problems, useless meetings, bogus decision processes, lack of visibility into who is impacted by a decision, etc. It's true that big projects are more difficult to manage than small ones, but in truth bad management or bad coworker dynamics can destroy motivation in big or small companies alike. I have worked in a few startups and two indie game companies, and all were plagued by mismanagement as much if not more than in my other experiences at a bank and at a big cell-phone company. I may have been unlucky, but it may be a simple truth about the programmer's job: working with other people is hard, and team dynamics are very important.
This is why I left my 'dream job' of working on a AAA MMORPG. I came on board early as a senior Linux systems administrator, the first member of a 'NetOps' team, which later split off and grew into a number of very large, very specialized teams. My loose definition of 'dream job' at that time was 'large scale' and 'video games'. Cool!
It took a few years for me to redefine what a 'dream job' really meant, and being a drop in a bucket was not it, so I left and moved on (slowly) to freelancing, and haven't looked back.
The last AAA game I played was Oblivion, which I couldn't finish. I haven't really played a AAA game since, and have only played two video games all the way through since (Braid, and Monument Valley).
When the OP talks about working on a project so big that no one person really "groks" the whole thing, I can relate, but I also want to say "it shows".
IMO, the current state of AAA games is shit. I think the reasons they are this way have to do with what the OP is complaining about: the originating vision of the game comes from Marketing, not an artist, and no one person has a vision for the game. Maybe video games just have too many resources at their disposal.
I think I read somewhere that either Ocarina of Time or Mario 64 had double or triple the playable content of the released game during development, and that Miyamoto had a perfectionist eye for the game and was merciless about what made the cut.
Resource constraints are a good thing, IMO, as it forces people to make a razor focused product that trims the fat mercilessly.
Having unlimited resources is the enemy of good decision making, and it shows in the current state of video games (and film too). Games and movies are just too long/full these days.
If the latter, fine; at worst they are unoriginal. If the former, then they have never seen the movie, or don't understand the movie and the absurdity of the title character, let alone of "loving a bomb".
Or, this phrase is common and I erroneously associate its origin with the film.
In every case but the last, it irks me, but for no good reason ultimately.
As soon as you get people working on a project that are too specialized, no matter the size of the team, you inevitably get conflicting concerns. I think it's very important for managers to understand what those concerns are to be able to take the right decision.
I also think that even specialized people should have some knowledge of other specializations (e.g. designers that understand programming, and vice versa). On very large projects, this is impossible as there are just too many fields, but still I value very much "general knowledge" for that reason.
Anyway, good luck Maxime in your endeavors.
"On large scale projects, good communication is simply put just impossible. How do you get the right message to the right people? You cant communicate everything to everyone, theres just too much information. There are hundreds of decisions being taken every week. Inevitably, at some point, someone who should have been consulted before making a decision will be forgotten. This creates frustration over time."
This is an issue I've wrestled with over the years - too small a company and your resources are limited, too large and progress mires, and it mires because of communication.
A bit related is when you work at big companies like Apple and Tesla. These companies have a "hero" at the top. There is nothing you can do but wait for the headline that talks about a feature you made and credits it to Elon Musk's doing or Jobs's amazing leadership. I have nothing against these two, but it is very demotivating to work there.
Compare that to small studios, where you can really feel like part of a family. It's very different, and all these kinds of feelings are more intense than other IT companies I've worked at. (Probably partly because of the extra time you tend to spend there when working in the games industry...)
Having said that -- some of my best friends were made when working at the big AAA studio! So it's not all bad.
Sidenote: before he said that the small projects were cancelled, I assumed that they were Evolve (https://evolvegame.com/agegate/) (I don't follow games close enough to know which studio makes which game).
I'm curious as to how he was able to, I assume, bootstrap a game company for a year before releasing an iOS game.
All software written at this stage is small cogs on a much bigger platform written by teams of brilliant people over the last 30-40 years.
I do think it's fair to say you want to work on actual interesting problems and being one of 20-40 people working on a game engine is probably very tedious. I imagine long code-review cycles since any tiny change could destabilize the entire system several layers up.
Some people need a big organizational structure to produce their best work, while some people need the freedom of infinite WFH days answering directly to users to produce their best.
Wow. IMO A dream job is a balance between having fun like you described and working on complex problems. I love how you have written this paragraph.
. Build your own company and you will end up accepting profit as the flagship goal.
. Find a job where you lead the direction, and internal politics will make you adapt in ways that go against your life goals.
. Make an open source project that no one will use.
I often had dreams of doing the same thing, especially inspired by this guy http://www.konjak.org/ .
It seemed like overkill for me as I could never get a team together.
Though with the rise of VR, I've been looking into Unity3D. How cool would it be to build your own world, then jump in and visit it?
I sort of suspect not. I am currently refactoring an (albeit important) part of the LibreOffice codebase - the VCL font subsystem. Mostly it's reading the code (in fact, 90% is reading and understanding the code), but it's kind of satisfying looking at how changes to the code make things better and... more elegant.
Perhaps this is just an Open Source thing. Or maybe I'm unusual in that I like to focus on smaller modules and make them really good, then move on to the next thing.
Best of luck :-)
When I was in University I didn't understand why some people didn't care about grades and partied so much. When we left school and got into the real world I understood why: they had rich parents with contacts that could get them good jobs or seed capital for their own businesses.
I had lots of ideas and worked in a lot of startups for more than 10 years but now the following phrase from the article describes my situation very well:
"Most of the time, potential founders who share my background tend to work at lucrative jobs in finance or tech until they can take care of everyone in their families before they even dream about taking more risksif they ever get there."
It does make it really hard to change your mindset when you come from this sort of background, when you've achieved more than anyone in your family and therefore can't really talk to them about your ambitions or career objectives.
It sounds awful, but sometimes I wish I had been born into a different family, with highly educated parents I could have amazing conversations with, who would encourage me to achieve and grow even more.
I find I constantly have a mindset of "I'm not good enough" and it's paralysing. I want to interview for the top tech jobs out there, like Google or Facebook, but my brain keeps telling me I'm not good enough, it's awful.
My family basically fell apart when I was around 11. My parents divorced. I stayed with my father, siblings went with my mother. My father turned into a drunk. I spent good nights carrying him from the couch to bed, and bad nights carrying him from the lawn, sometimes without clothes. I learned to drive bringing drunks home when I was about 13.
I had no social skills. I struggled in school and failed a grade, though I eventually made it up and graduated high school on time. No one ever even mentioned college to me. I never thought about it until everyone I knew was talking about where they were going. Toward the end of high school my father's alcohol habit turned into a hard drug addiction. About a week after my 18th birthday, we were kicked out of our house because he hadn't paid rent in months. He went to go live with a fellow addict and I became homeless.
I lived on friends' couches for a while. Around that time I realized that life could continue getting worse, or I could start fighting the tide. I got a job making pizzas, then doing construction work, and then started a sub-contracting company doing construction when I was 19. When I was 22 I had 14 people working for me. I ended up shutting the business down, mostly due to mistakes I had made. After that, I got into tech.
I'm 30 now. I've got a family and don't have much of a relationship with my parents or siblings. I make a solid salary, and have done fairly well in my career, but I struggle with pretty severe imposter syndrome. I have trouble making lasting connections, and have failed entirely to find any mentorship. My wife hardly knows anything about my history, but she knows more than any of my friends.
All of this is a long winded setup to say, I didn't get that transformational experience that the writer here experienced at university. I didn't even know SAT classes existed until well after they would have helped, and had never heard of Stanford until I was into my tech career. I would have given quite a bit to trade my father for an immigrant who simply didn't work. I very much admire the writer's drive and results, and don't mean to detract from any of that, but I have a hard time fighting the urge to point out that he had more privileges than he probably realizes.
First, let me say that I am happy where I ended up. I'm successful, enjoy my work, and when I compare my personal income with our family income when I was growing up, it is an absurd multiple.
We were a very poor family in a poor part of the South. I went to a top-10 small private university on a full ride, felt completely alienated and never quite figured out how to function in that environment. I dropped out and moved to San Francisco at what turned out to be a very good time (early 90's), and once Netscape dropped, discovered nobody else knew what they were doing with this web thing either, and more or less faked it until I made it.
At the same time, I have had and do have ideas that others have executed on, that I know I could have made a go at, if only
The if only list is long, and most of it comes back to self-imposed limitations that I can trace back to how I grew up. Frequently it relates to economic security, but there are other habits of thought that stop me from even getting to worrying about that.
One big one is that I never learned to think about entrepreneurship. A big lesson hammered into me growing up was the importance of "finding a good job", not figuring out how to make my own.
I did start a company in my mid-30s, and we did OK, until we didn't. And that failure (I think) had nothing to do with the habits of thought of a poor kid. But failing in a similar way in my 20s would have left me in a position to learn from that and try again, something I'm unlikely to make a go at 10 years later. I do little things for side income, but those are hobbies.
So it ends up being this thing that doesn't really bother me at this point, but does leave me to wonder what would have happened if I had picked parents from a very different walk of life.
And I am quietly amused when people tell me how they built everything themselves "after a seed from Dad", or "with a great connection I made through a family friend" or similar. Those are impossible blockers for a lot of people, even if they get over some of the habits of mind better than I did.
This is what's behind the achievement gap anxiety: Wise rich people don't want to perpetuate a world where only money selects success. It's wasteful and ultimately unsustainable.
The "poor" kids also tend to find each other at college and over the first few semesters form separate networks from the rich kids. People tend to want to hang out with people who are similar to them. One group goes out partying together, the other sits in a dorm room listening to music and drinking a $15 handle. Their friend groups don't overlap over much.
The poor kids tend to build networks where the personal skills and resources members bring to the table in the present are what matter (or that was my observation). I guess when you can't throw money at a problem, knowing who's the IT guy and the car guy becomes more important.
I got turned down by 15+ companies and startups in the past few weeks because they couldn't sponsor my work visa. This is Canada.
The USA? Being a dropout makes me ineligible for any US work visa.
So much for merit.
How do you identify those who are underprivileged, but carry that quality too? It can be very difficult to identify.
This post applied some of that individualistic attitude to a much broader and obviously systemic problem.
The OP does not say that his parents didn't show him any love, which is more important for the development of a person than any economic status. Many of the other struggles might be used as fuel for building positive character traits, if one lets them.
Having read through the post, it doesn't appear that he's actually arrived at a valid point, and is just trying to brand himself as being underprivileged through the telling of his life story, which has turned out to be successful by most standards. He uses the argument that "mindset inequality" gave him a chip on his shoulder so he was able to succeed, and therefore others fail because of it, which seems contradictory.
Here is one instance where that powerful way of thinking runs head on into a stone wall. He said "few successful founders grew up desperately poor" and moved on. Succinct, yes, but not powerful. This piece took a couple thousand words to say the same one succinct thing that PG said, and nails it in terms of the empathy it generates and the power with which it communicates, while PG's writing on this issue comes across as aspie. This is the lesson he needs to take from that latest article and the Internet's reaction, and not conclude "life is short" and totally miss the point.
Narrativity and Authenticity and Poetry and Verbosity is power! (when dealing with humans).
This is not the only recent post where the topic is "oh golly gee, look at the hardship I went through to get through college and then found something."
It's a millennial post, and there have been many of them.
Going through college is a challenge ... Having to work or be responsible during such sucks (I interned at Borland as well as worked for an astronomical research company).
Post college, more than a few have to deal with life obligations that come up.
Our profession certainly offers a bit of a cushion and flexibility, but we have to manage that and our obligations.
I don't see someone here whining about having to support their parents due to the last downturn or the many other personal decisions made.
The blog would have been better written as challenges met and overcome, leaving out the, for lack of a more tactful word, whiny bits...
Yes, coming from poverty has its challenges, and friends in that situation stretched into their late 20s to complete a degree... but perspective and awareness of the wider world are needed... Not another post about personal insecurities.
Most people don't have the money or resources to build a company like this, which is why we have VC. They know you are in a desperate situation and exchange the money that you need for a % of the company.
The better thing to do is choose a solid business idea that can be built slowly and at a certain point, put money you make from this venture into an idea that needs more capital to succeed.
> Compare that level of confidence to a kid with successful parents who'd say something along the lines of "If you can believe it, you can achieve it!" Now imagine walking into a VC office having to compete with that kid. He's so convinced that he's going to change the world, and that's going to show in his pitch.
I enjoyed this article a lot but clearly this guy also made some of his own hardships. Going on ski trips just to fit in and then running out of money is incompatible with the image of a frugal poor kid.
The essay by PG actually meant that there are no poor founders at all. It would be interesting to have statistics on whether poor founders fail more, or don't even get a chance to try at all. I have reasons to believe that the rare poor person is more motivated and determined than the average groomed-to-be middle class entrepreneur, and there are plenty of cases of dirt-poor persons becoming millionaires.
I have lived in China for more than 5 years, and in Boston, Japan, and Korea for more than 9 months each.
In my opinion, minimizing conflict has nothing to do with being poor, and a lot to do with being Chinese educated.
On the contrary, I volunteer helping poor kids like Spanish gypsies or sub-Saharan Africans, and they (and their parents) are ultra confident and spontaneous. Being open is the default for them.
I managed Chinese people in China and there was a world of difference between natives and those Chinese educated overseas.
When living in the US, I was shocked to see parents cheering their kids for the most stupid things, whereas in Europe as a kid you are forced to put in 4x more effort without any rewards at all (like learning multiple languages). It is just what is expected of you.
In Asia, this pressure over kids is even higher than in Europe.
Family is very important for the Chinese, almost a religion. This has advantages and disadvantages. For innovation, it is a big disadvantage. Innovation means taking risks, and being close to your family means having to convince lots of people those risks are worth it. Most people won't understand you, and it is very hard.
In the US, everybody is on their own; basically, nobody gives a damn, which is great for changing the world.
Excellent article by the writer. Apologies for the long post; however, I hope it is helpful for someone in a similar situation. I can relate to many things that he has faced and feel incredibly lucky to not have faced some things that he had to.
I grew up in a small town in a poor family in India as the eldest of four siblings. Our monthly budget was 20 dollars and things were really tight. However, my dad worked really hard, 16 hours every day, and made sure that my studies did not get hindered. He told me every single day that with hard work I could achieve anything that I could dream of.
I got into IIT Bombay (one of the most prestigious colleges in India). However, it was obvious to me that I needed to get a decent-paying job right after school to support my siblings and my dad, who couldn't do 16 hours any more.
It took me the next 8 years working for others to save enough to pay for the studies and marriages of me and my siblings and to help my dad retire.
During these 8 years, I built and ran the biggest social network to come out of India. Apart from this, I also built something which is now the Twilio of India. I was also part of the team which built the current mobile offering at LinkedIn.
If I had had financial stability, I would have started working on my own ventures 3 years into my career. But it took 5 more years. As soon as I had financial stability, I quit LinkedIn (with 2.5 years of stock unvested) to start a company.
I started a company, where we had incredible opportunities. We built something like Slack for consumers, around the same time as Slack. However, being on an H1 visa, I was a minority stakeholder in the company. And that is a bad situation to be in if your traction is not already proven. It made sense to exit the company, so we sold it to Dropbox in an acqui-hire.
Dropbox treated me really well. I met some of the smartest people I have ever met over there, and it can be a great place to work for many people. However, I soon realized that it wasn't a good fit for me. Such companies are very top driven, there is little creative freedom, and most of the work is cleaning up the tangled code developed over 7-8 years. So I quit Dropbox after a year.
Now I am in a job that gives me more creative freedom, and I am pretty happy on that front. Meanwhile, I have been the sole advisor for a few companies over the past 2-3 years, and they are all profitable and didn't need to raise any money. The entrepreneur in me keeps me raring to go and start another company. However, because I am on an H1 visa, I do not want to build another company with a minority stake at formation (USCIS rules). To fix this, I would need to get a Green Card. However, if you are from India, it will take you 8-10 years to get a Green Card in EB-2.
So the next steps are either to move from the US, or to find a way to get a Green Card on EB-1. If anyone knows any good immigration lawyers, please introduce me.
However, related to the original post: in spite of motivation, talent, and hard work, financial situation and immigration (in my case) play a big role in your entrepreneurship journey.
It's certainly not the first time I've thought about this topic, but for whatever reason, the OP and much of the discussion is resonating very deeply for me (and apparently for a lot of folks). IMHO, this is some of the most productive discussion about privilege and opportunity that's ever appeared on the internet; for the most part, this discussion has avoided the sort of aggravated competition (i.e. pissing contests) and judgements that generally arise out of internet discussions of privilege. In place of those nastier (albeit very human) responses, this thread is full of empathy, support, and offers of help.
I'm very proud of our little community here today.
I'm planning on writing a more detailed post in a few days after collecting my thoughts a bit more, but I'd like to share some half-formed ideas which this post has inspired (comments and criticisms are very welcome!):
1) Part of what's awesome about this discussion is that it seems to have enabled a bit of ad-hoc group therapy. I think it's very helpful for folks who are facing these hurdles to realize they are not alone; while everyone's situation is unique, it's great that people have been acknowledging similarities in their stories, rather than arguing about the differences. We should try to do more of this (with other contentious topics as well)!
2) As several people have suggested, I believe that collecting these stories could potentially help a lot of people. I'm totally down to build and host a site towards that end - would anyone be interested in sharing their stories in that sort of venue?
3) While the specific issues that people have had to deal with are different, there seem to be some common 'flavors' that many have experienced:
a) Socio-economic disparity causing an aversion to risk later in life.
b) Lack of confidence in oneself, which adds an additional handicap compared to more self-confident people, likely resulting in missed opportunities (you can't win if you don't play vs. you can't lose if you don't play); impostor syndrome.
c) Lack of connections, again likely resulting in missed opportunities and increased difficulty in building new things/finding a job/etc.
d) Disparity in access to knowledge that greatly improves chances of success (e.g. importance of SAT scores to college admissions; efficient resource management; interview skills).
Improving the situation in (a) seems to be what the world at large is most interested in. Unfortunately, it's a difficult, heavily politicized, and therefore divisive issue. By contrast (b), (c), and (d) seem like problems that we could really improve, at least within our own community.
For example, someone might have a harder time getting the type of (tech) job that they want due to a lack of personal connections (it can be really hard to get your foot in the door), however, it's likely that the personal connections they need are actually visiting this site every day. While we obviously can't just start providing references for total strangers, how much effort would it be to spend a few hours corresponding with someone and vetting their skills to see if you feel comfortable in recommending them? (I'll put my money where my mouth is on this one - if anyone feels like they'd be a good fit at Cloudera, let's talk! EDIT: just to be clear, I don't really have any hiring authority, but I'm happy to talk to anyone, and potentially help with a recommendation)
Likewise, it seems that (b) could be improved for a lot of people with simple communication - impostor syndrome is very common in tech, so I assume that a lot of people here have advice on the subject, or just an empathetic/sympathetic ear.
Regarding (d), this type of information is all likely available already on the internet, but perhaps it could be more usefully compiled for this particular case, minimizing the number of unknown unknowns? What about a thread (like "Who's Hiring") listing offers for mentorship ("Who Needs a Mentor?") ?
I dunno, am I just being overly optimistic here? It seems to me there's a lot of low-hanging fruit here, if some of us are willing to dedicate a bit of time to it.
More ideas? Criticisms?
I was born in Albania, a small, poor, European country with a GDP comparable to Zimbabwe, Namibia, or Sudan. That same year marked the fall of its isolated strain of communism, and Albania's borders were opened for the first time since WW2. In the late 90s, after the collapse of its economy and Ponzi schemes, social unrest reached its height following the violent murder of peaceful protesters by the government and police. This sparked an uprising and the government was toppled. The police and national guard deserted, leaving armories open; these were then looted by militias and criminal gangs, with factions fighting in the streets to take control. My parents moved our beds to the hallway of our small apartment as there were no windows there, and my little sister and I had to stay quiet so no one would hear we were there. After a UN operation, the government was restored, and the situation was relatively calm. Sometime that following summer, my dad found out about a US green card lottery, filled out an application form, and because he was in a hurry, handed it to a random stranger waiting in line to submit it for him. He then forgot about it, until a year later, when we got a letter telling us that we had won. My parents weren't badly off in Albania; they were comfortable, their friends and families were there, they had great jobs, and the future looked promising. But having just gone through that rebellion, with the Yugoslav Wars to the north trickling across the border and the allure of the American Dream, they decided it would be best for my sister and me.
We moved to Philadelphia in 2000, into a working-class neighborhood, with a few suitcases and not one word of English. My parents took on multiple jobs; their Albanian communist-era degrees were obviously not recognized in the US, so my dad, once a doctor, is still working maintenance and shoveling snow on the East Coast as I write this. Like Ricky said, and like all immigrant kids, my family depended on me to learn English and deal with translation and everything in between. Five years later, when we became citizens and received our passports, my parents knew more about American history than was taught at my inner-city high school.
My parents are incredibly supportive, but they moved to the US in their 40s; they weren't familiar with the language, the culture, and, even more importantly, capitalism. Apart from the classic model of education, they weren't familiar with the tools required to be successful in a strange place like this. But with their meager wages they were happy to support my hobbies, and to buy me lots of books and a computer with internet access, which taught me much more than my inner-city schools did.
Eventually I got a college degree, then went on to do a dual masters in design and engineering at the Royal College of Art and Imperial College in London. I even got to go to Tokyo and work for Sony while studying there. I graduated this past summer, and then launched my final group project as a startup in London with my friends: two English, Oxford-educated engineers, and a Spanish designer/engineer whose father is the president of one of the largest companies in the world.
Then reality sank in. I had to leave; I can't be an entrepreneur just yet. I moved to SV to find a high-paying job in tech for the next 5-10 years, so that I can:
a. afford to pay rent
b. pay off my educational loans
c. pay off my parents' home
d. help my sister pay for her education
e. send some money home because my dad is getting too old to shovel snow
I've become allergic to words like "privilege" as they usually are seen in the company of ill-thought-out and grandiose/insulting/wrong proclamations about How Things Should Be Done,
...but this is none of that - it's an honest look and deep analysis of someone's experience.
And knowing how important upbringing is, and the sheer (almost superhuman) tenacity the author had to go through to even partially overcome the (poisonous? non-optimum?) mindset that was completely a result of things out of their control...
what the heck is everyone else supposed to do? How does society do right by people like this? Overall, we're pretty horrible at dealing with things that are as subtle as mindset.
The shock came from seeing how I lacked culture/experience/skills/confidence others had. And these others had grown up in more stable environments with either some or quite a bit of money.
I didn't know how to play any instrument. I wouldn't say everyone I knew in college played an instrument, since I wasn't at Stanford :) but still, it was obvious to me I LACKED the soft skills my peers had.
I had not done many things as a teenager that are possible only when you grow up in a family with some means. And this weakened my already not-so-robust self-confidence, resulting in a mostly downward spiral as far as confidence in myself goes.
You see, growing up with money buys you a lot of soft skills that help you later.
I'm not bitter though. It is what it is. I try to be thankful for what I've had so far.
It may not be something a startup can solve, but "administrivia as a service" - some means of connecting families in need with someone able to actually advise them and not take advantage of them - might help.
In the UK we have a volunteer service called the Citizens Advice Bureau - I am thinking something like this on tap might be beneficial in ways that are hard to quantify.
?
I lived in a society where everyone was almost the same, similar economic status, similar privilege etc. etc. Life sucked. I decided to move out to be among the top 10% instead of one of the 100%. I eventually ended up in SV.
This place is awesome, and the very reason I am here is that I can be in the top 10%. I don't want to be equal; I seek privilege, extraordinary wealth, and stuff that most others cannot afford. I think it is an amazing thing that places like SV exist. If you somehow take out that incentive, I think I will move somewhere else. Of course, I would be moving out of California sooner or later given the taxes.
[0] - Apart from the safe suburban upper class childhood, the prep school and Harvard education my parents paid for, the job at Goldman Sachs my uncle got me straight out of school, and the finance network from that experience that eventually helped me with my first funding rounds, but yea, besides all that I'm TOTALLY SELF MADE!
I liked that you went to a community college. I too screwed up in high school. I didn't even know why people were taking another test--the SAT. That said, I cleaned up my act in my senior year, but it was too late.
Everything, and a lot more, that I missed in high school, I made up for in two semesters at community college.
If anyone in high school is reading this and thinking, "I wish I could do it over", you can! I had a great time at my community college. I saved a lot of money, and met some really wonderful people. The teachers really seemed to care. I didn't find that at the four-year school, or even my professional school.
Just make sure to transfer and get that four-year degree. So many people don't transfer to a four-year university, or even get the associate degree. Yes, so much of college is absolute bullshit, but degrees are still valued in a lot of professions. It's changing though, and I couldn't be happier. British companies are taking the lead. I know that at Penguin Books, HR isn't even allowed to know if you went to college or not. You are hired on your experience, and maybe a test? The way it should be.
The author is not arguing that you literally cannot compete if you're poor. But it's the very mindset from growing up in poverty that, through almost every interaction you have in childhood, leads you to _believe_ that you cannot compete, which prevents you from even trying. And even if you overcome that feeling (through constant hard work and willpower, such as our author's), say you do try to compete with the rich kids, then your lack of inborn confidence is so obviously apparent that you come off as inexperienced, or insincere. This is perfectly accurate in my own experience.
Mindset inequality is actually an incredible way to describe it.
This breaks the HN guidelines: it is both a personal attack (since you're talking about the OP) and gratuitously negative. Please do not post comments like this here.
Room for a startup/free service that does that!
One of the developers remarked that while he was proud the system he worked on could deliver such uptimes, having an uptime of, say, three years, on a server, also meant that a) its hardware was kind of dated and b) it had not received kernel updates (and probably no other updates, either) for as long. (Which might be okay, if your system is well tucked away behind a good firewall, but is kind of insane if it is directly reachable from the Internet.)
Still, that is really impressive.
People here forging ahead with innovative hardware: why not just record brief details of dates and setups in the back of a diary or something? In 30 years' time, you'll be able to start threads like this!
http://www.digitalprognosis.com/pics/bye-bye-uptime.png
I was sad that we had to shut it down, but we were migrating our primary colo to another city and were going to retire all of the hardware. I'd been manually backporting BIND fixes, building my own version, and had to do some config tweaks when Dan Kaminsky released his DNS vulns to the world.
It is always a sad day to retire an old server like that, but 18 years... What a winner!
Edit:
But 1158 days for an old dell 1750 running RHEL4 isn't too bad considering it serviced all kinds of external dns requests for the firm. Its secondary didn't have the uptime due to constant power issues in the backup datacenter and incompetent people managing the UPS.
Considering that the datacenter it was in is now the Dropbox office, I'm guessing it had to be shut down and moved at some point, but 2+ years seemed like a really long time even then!
FreeBSD is just really good at lasting forever.
In 2002 I had installed on the machines under my guard some program that reported uptime to some website. One of my machines, an SGI Indy workstation, had a high uptime, about 2 years. Then a new intern came, and we installed him next to the Indy. Unfortunately, his feet under the desk pulled some cables and unplugged the Indy and broke my hopes of records :)
A lab was decommissioning an instrument controller that had been running non-stop since they had first spun it up, fresh out of the packing box, a decade previous.
And they had never backed up any of the data. Sure, the solution was the pretty straight forward use of a stack of floppies. It was still pretty nerve-wracking having a bunch of high-powered research scientists watching over my shoulder, "making sure" I got all their research data off the machine they were too smart to ever back up themselves. Good Times.
When it was in the basement of my home/office, I would sometimes hear its disks whine as I was working out (lifting weights and such). It was even in my basement through parties in my early bootstrap years.
I originally bought it to run WinNT 4.0 for a new company a friend of mine and I bootstrapped. I would guess a couple years later is when I put Slackware on it. It's running a 2.0 linux kernel. It's not exposed to the public Internet.
It used to be the local Samba, DHCP, and DNS server for the company. I eventually upgraded to new hardware and left this server around for redundant backups. I develop software, so copies of my git repositories find their way onto this box each night. It is in no way relied upon, other than being called on out of convenience if another server is down or being upgraded, etc...
At one point the box was in the basement of my home when a small amount of water got onto the basement floor, but because the box sat just high enough on rubber feet, there was no damage. Occasionally I go back there and pull the cobwebs off it.
There is no SSL on it. We still telnet into it or access the SMB shares for nostalgia. It's sort of a joke in the office these days to see how long it will last or if it will simply out last us.
C11 (and C++11) defined a memory model and atomic operations for shared-state lock-free concurrency. But that model and the atomic operations aren't being used by Linux, because they didn't match up with the semantics of the operations that Linux uses. (See https://lwn.net/Articles/586838/ and http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/p012...).
I'm curious what Rust says about this. Does Rust have a memory model like C11/C++11? I'm curious whether Rust (and C11/C++11 for that matter) will evolve to have primitives like what the Linux kernel currently defines and uses.
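For what it's worth, the atomics in Rust's standard library expose the same ordering vocabulary as the C11/C++11 model (Relaxed, Acquire, Release, AcqRel, SeqCst). A minimal release/acquire handoff looks roughly like this sketch (my own illustration, not kernel-style code):

    use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
    use std::thread;

    // The producer publishes DATA, then raises READY with Release;
    // a consumer that observes READY == true with Acquire is
    // guaranteed to also see the earlier store to DATA.
    static READY: AtomicBool = AtomicBool::new(false);
    static DATA: AtomicUsize = AtomicUsize::new(0);

    fn main() {
        let producer = thread::spawn(|| {
            DATA.store(42, Ordering::Relaxed);
            READY.store(true, Ordering::Release);
        });

        while !READY.load(Ordering::Acquire) {
            std::hint::spin_loop();
        }
        assert_eq!(DATA.load(Ordering::Relaxed), 42);
        producer.join().unwrap();
    }

Whether the kernel's own primitives can be expressed cleanly on top of these orderings is exactly the open question in the LWN discussion linked above.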
As the HN crowd seems to have quite a lot of Rust supporters, would it be a good selling point in a job description?
I.e., if (it's currently just a personal hypothesis) a company were considering (re)writing some part of its REST-ish microservices, Rust was the chosen language, and it was looking for people to help with that, would that register as `interesting++` in your mind? For real services used by real people, at a not-so-startup company, in Europe.
edit: I already deployed some microservices in Rust to production at my previous company (with a very strictly limited scope, and with everyone's approval) and it was quite a success, so I'm more and more convinced that Rust is now developed enough to fit the market for microservice languages, as it's more or less "understand HTTP, read/write from Redis/PostgreSQL/MySQL/memcache, do some transformation in between", and Rust now supports these operations quite well.
You can follow along, all slides and HW are on github.
For those wondering, not being stable just meant that the APIs defined for interactions with some libraries were subject to change. It wasn't a problem with using the APIs; it was just that the developer has to know that new releases might change how they worked or whether they would even be available in the future.
I have several crates providing access to GPIO/SPI/I2C under Linux and would like to put together a roadmap for having the interface to these types of devices that is portable to various platforms and devices (e.g. Linux as well as Zinc, ...).
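To make that concrete, here's a rough sketch of what a portable pin interface might look like; the trait and method names are hypothetical, not taken from any existing crate:

    /// Hypothetical portable GPIO abstraction: each platform backend
    /// (Linux sysfs, Zinc, etc.) would provide its own implementation.
    pub trait OutputPin {
        type Error;
        fn set_high(&mut self) -> Result<(), Self::Error>;
        fn set_low(&mut self) -> Result<(), Self::Error>;
    }

    /// Driver code written against the trait stays platform-agnostic.
    pub fn blink<P: OutputPin>(pin: &mut P, times: u32) -> Result<(), P::Error> {
        for _ in 0..times {
            pin.set_high()?;
            pin.set_low()?;
        }
        Ok(())
    }

The idea being that device drivers depend only on the trait, and swapping the Linux backend for an embedded one is just a different type parameter.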
The native thread/connection model quickly shows its limits, and is very slow on virtualized environments.
mio is as low-level as libevent and just as painful to use.
mioco and coio are very nice but blow the stack after 50k coroutines.
So, it allows you to have safety and control. I thought that was very neat.
I've got three questions for experts. One, what type of applications is Rust intended for?
Two, I like JS because I can code the client and the server in one language. Will there ever be a web server framework for Rust and an API that allows me to modify the DOM?
Three, what are your predictions for the future of Rust?
But for web services? I think it is overkill. I think Go strikes a nice balance. I would love to be convinced otherwise though. So please tell me, what am I missing?
Fun hack! I feel like there should be some clever practical applications but I'm drawing a blank.
But the rub is always the propensity for users to forward on those same emails. If they do, then the second recipient gets control of the first recipient's account, and that's rarely the intention of the first recipient/forwarder.
I haven't had a chance to dive in enough, but I wonder if a technique like this could effectively swap tokenized links with generic links (even if you're just swapping 'display' rules) when a message is forwarded. You might have to use different message style/markup output depending on which service you're sending the message to, but my read of this article is that it's not a ridiculous thought.
Make a link per identifiable client, show only the one for the current client, and give each link a post/get parameter identifying the client. Quite easy to do, but a lot of work to have broad client support.
Tada! I now know you read your email on your [obscure and bugged client], which is susceptible to [this and that exploit].
My company moved ~5 blocks and it really screwed up the map on my phone (which I use to get around the city) for several months. My company had left the network SSID the same in the old location, so that no one had to re-configure their wifi. Even with GPS on, my phone was always convinced it was in the old location up the block, and this would persist even when I was out on the street, until I walked around a bit.
There are companies (presumably Skyhook is one of them) who drive around mapping SSIDs to physical locations. The problem is that SSIDs can move or be duplicated elsewhere.
The article says of one of the couple: "at one point he reset their router, and changed the frequency at which it broadcasts; it didn't solve the problem." It does not say if he changed the SSID.
Theoretically, location is often determined using not just one but several nearby SSIDs, a sort of triangulation. Another possibility here is that there are multiple nearby SSIDs around this home that match the SSIDs surrounding some other area tied to the victims.
Logged onto "Find my iPhone" app and it told me it was about 1km away from my house. I thought I must've dropped it somewhere nearby.
So I got the address from Apple Map, drove there and knocked on the door to greet a rather defensive (obviously) lady who, of course, denied ever picking up an iphone that day.
I snooped around to see if there were any suspicious people around, maybe she has a wayward son who goes around and steals other people's phones.
I then went to the police office nearby and asked them what I could do. They told me they can't use the GPS tracking as evidence for a search warrant - doh!
It was frustrating because the app was telling me that my phone was right there! At the back of this lady's house!
At this moment, I was going through all sorts of thoughts - such as "should I break into her house at night?", or "should I go back, just barge into her house, locate my phone, and shout 'AH HA! I KNEW IT! YOU THIEF!'"
Feeling dejected, I came home, only to find my phone sitting on the top of my drawer.
2 seconds ago, I swear I thought she was the thief.
Apple - you disappointed me.
> In June, the police came looking for a teenage girl whose parents reported her missing. The police made Lee and Saba sit outside for more than an hour while the police decided whether they should get a warrant to search the house for the girl's phone, and presumably, the girl. When Saba asked if he could go back inside to use the bathroom, the police wouldn't let him.
> "Your house is a crime scene and you two are persons of interest," the officer said, according to Saba.
The police shouldn't be able to detain someone for over an hour without probable cause and without arresting them.
I suppose it's fortuitous that nobody lives at 0,0...
There's a claim that it's been done before [1]. Maybe a criminal organization has the resources to build a GPS spoofing device that's used in their "holding facility" before they root the phone?
Can anyone with RF or GPS experience guess the difficulty?
[1] https://en.wikipedia.org/wiki/Iran%E2%80%93U.S._RQ-170_incid...
- The name of the wireless network is in a database and the first match gets picked up; that might not be fixable by changing the router name or IP address, as it is already recorded somewhere.
- The same goes for other routers or IP addresses in the neighborhood.
- Maybe it's not their fault, but someone else did it on purpose, e.g. took the cell phone and manipulated it inside a room with stolen/faked/forged data somewhere else, wrapped in a metallic sphere to block signals so only the forged router could be used?
- Even crazier: put a router really close by and route it through a VPN/proxy?
I am not implying anything about these people, but I am just saying it isn't impossible.
I lived in Las Vegas for a couple of years and was involved with some people who, from the outside, seemed like very normal folk...in fact, in many ways, I was someone like that, too, due to issues I was fighting at the time.
We all have a different set of experiences in our lives, and, unfortunately for me I suppose, my experiences make me think about this in a different way than many here might.
Based on this line from the article, I'm almost certain that this is the real answer:
> It started the first month that Christina Lee and Michael Saba started living together.
http://www.cbc.ca/news/canada/toronto/jeremy-cook-18-killed-...
Of course the reality is that key Google and Apple staff know exactly what has caused this and don't have a ready solution so are keeping quiet.
In any case, if there are Google or Apple employees reading, perhaps you can suggest this idea to someone internally in the chance there may be some progress before someone innocent gets killed for 'stealing' a phone.
This is a problem caused by incorrect data representation. Everything the companies know about the location of the phones is an imprecise area, yet they are representing it with this: [1]
This absurdly precise representation doesn't convey the error margins of the information available, and it's what's convincing people that this couple's home is the point that they're looking for. The mapping companies are misleading their users by hiding the level of confidence about the information provided.
Please, all front-end developers: don't use a map pin to represent a place on the map if you don't know its exact coordinates or address. A circle with a radius proportional to the area of uncertainty is the best representation in that case.
[1] https://www.google.com/search?q=icon+map+pin&biw=790&bih=750...
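A minimal sketch of that pin-vs-circle decision (the types, field names, and the 20 m threshold are assumptions for illustration, not any mapping SDK's actual API):

    /// A location fix as a positioning service reports it:
    /// a center point plus an accuracy radius in meters.
    struct Fix {
        lat: f64,
        lon: f64,
        accuracy_m: f64,
    }

    /// How a fix could be rendered: a pin only when the uncertainty is
    /// smaller than the thing being pointed at (roughly one building),
    /// otherwise a circle whose radius reflects the uncertainty.
    enum Marker {
        Pin { lat: f64, lon: f64 },
        Circle { lat: f64, lon: f64, radius_m: f64 },
    }

    fn render(fix: &Fix) -> Marker {
        const BUILDING_SCALE_M: f64 = 20.0; // assumed threshold
        if fix.accuracy_m <= BUILDING_SCALE_M {
            Marker::Pin { lat: fix.lat, lon: fix.lon }
        } else {
            Marker::Circle { lat: fix.lat, lon: fix.lon, radius_m: fix.accuracy_m }
        }
    }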
Usually it's pretty accurate. But several times a year there are major anomalies. For example, showing me traveling the 4 miles to work in London via a quick journey to Oslo.
It's pretty reliable but not 100%
My explanation as to what is likely happening:"Every WiFi router has a special unique number - think of it as serial number - baked into the device by the manufacturer (known as a MAC address). Manufacturers request a range of these unique numbers from the IEEE, and are never meant to duplicate them. When you connect to a WiFi router, you connect to its friendly name (SSID), but also your phone receives a part of this special unique number (BSSID address) [1].
Companies like Apple, Google, and SkyHook, record the location of WiFi routers using this unique number. When a phone or other device has a strong GPS location and a strong WiFi signal, they can fairly reliably assume that this unique number is at this specific location.
However, not all manufacturers strictly follow the unique number allocation rule, as getting allocations can be a time consuming process. 999 times out of 1000, reuse of these numbers is not a big issue, and goes undetected. In this case, it is likely that the thieves are using, or are located near, a WiFi router with the same unique number as this couple. Changing this special unique number is sometimes possible on expensive enterprise grade WiFi routers by knowledgeable experts, but not possible or advisable on home routers. The couple should change their WiFi router."
Yes, I have conflated a number of terms there for simplicity. For technical accuracy: WiFi router -> access point.
Edit: added [1] https://arubanetworkskb.secure.force.com/pkb/articles/FAQ/Ho...
Since the BSSID is not exactly the same as the MAC address.
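Purely as an illustration of why a collision is so damaging, here is a toy version of such a database keyed only by BSSID (the addresses and coordinates are invented); once two access points share a key, or one moves, every later lookup confidently returns the wrong house:

    // Illustrative only: a naive WiFi-positioning lookup keyed by BSSID.
    const bssidLocations = new Map();

    function recordSurvey(bssid, lat, lng) {
      bssidLocations.set(bssid, { lat, lng });   // no history, no confidence score
    }

    function locate(bssid) {
      return bssidLocations.get(bssid);          // a single point, presented as truth
    }

    recordSurvey('00:11:22:33:44:55', 33.79, -84.39);  // the couple's router, surveyed long ago
    // ...elsewhere, a different router reuses the same BSSID, or the database never updates...
    console.log(locate('00:11:22:33:44:55'));          // still points at the couple's home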
Does anyone know how that works?
Here is a way to reproduce this issue and illustrate my point: if you were a thief, you could set up a GPS spoofer pointing at that house, or have had your router in that house in the past, so that some phones registered/verified its MAC address as being at the house location. Now assume the thieves live in a location where they took this router with them and where there is no GPS signal, other router, or cell signal, but only the thieves' router turned on. Now as soon as the thieves connect the stolen phones to their router, the phones will report being at the house.
My bet is that this is likely an intentional attack by the thieves and that they are aware of what they are doing. There is a small chance they could have been people living in the house before, or that they drove by to set up their spoof, as it would have been much easier than getting their hands on a GPS spoofer.
It may just all be coincidence, but that flight tracking feature is so wonky and jacked, giving false locations, legs, flights, and information on the regular. I am surprised it hasn't caused a massive outcry for just how horrible it is. It kind of makes me wonder whether there is some shared service or database or something because the flight lookup feature just smells of the same kind of failure.
I realize most people don't know/recall that iMessage will auto-link flight numbers. Just message the full flight number.
You apparently can just put a phone in one of these ATM-like machines and get money out, which immediately struck me as a clever way to buy stolen phones on the cheap from criminals, with indemnity... which would definitely lead to situations like this when those stolen phones are resold to unsuspecting consumers.
The "SSID"/MAC address problem was mentioned elsewhere. It's possible that they have a home router with its default SSID and are encountering a MAC address collision (assuming the MAC address is always taken into account, which I'm not sure that it is). Their router is likely part of some database that the GPS uses when the phones enter an area with WiFi but no cellular service or line of sight to the satellites. I had a similar failure every time I went indoors at an archery facility I visited weekly for three months. Both my wife's and my phone would think we were a clear 30 miles away in another city the second we got far enough into the building to lose cellular service. I dug into it and discovered it was using WiFi APs to get location. I think the archery place has another location in that other spot, so it's possible they swapped WiFi gear at some point, but it's anyone's guess.
Another possibility, hinted at in the article, is that there's no other location data available to the stolen phone (no mapped WiFi, no cellular service) but it has an IP address so the devices are falling back to Geo IP which is extremely inaccurate (my IP address changed recently and I am now a Canadian according to location services on my PCs with no GPS capabilities -- 200 miles off). It could be a circumstance of "that IP isn't known, but that block is owned by x ISP and here's a general location of where that is" ... only the little dot happens to land on their house.
It would be really smart for apps that track location for theft purposes to keep a reasonable history. If it's a mobile phone, the last known high-accuracy reading from the GPS should be presented along with lower accuracy results to help in situations like this. I'd imagine it wouldn't be terribly difficult to correlate several readings over a period of time and discard ones that are clearly not sane (as would have been the case with my phone in the archery place). A bonus would be to perform other actions when the device is marked "stolen", like take photos at certain intervals and upload them to the cloud to make it easier to "prove" your phone is in the hands of someone it shouldn't be (one of the tools I had did something like this).
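A minimal sketch of that sanity check, assuming readings come in as {lat, lng, timestamp} objects and using an arbitrary 150 m/s plausibility threshold (this is not any vendor's actual logic):

    const history = [];
    const MAX_PLAUSIBLE_SPEED = 150; // m/s; anything faster than this between fixes is suspect

    // Great-circle distance between two lat/lng points, in meters.
    function haversineMeters(a, b) {
      const R = 6371000, toRad = d => d * Math.PI / 180;
      const dLat = toRad(b.lat - a.lat), dLng = toRad(b.lng - a.lng);
      const s = Math.pow(Math.sin(dLat / 2), 2) +
                Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.pow(Math.sin(dLng / 2), 2);
      return 2 * R * Math.asin(Math.sqrt(s));
    }

    function acceptFix(fix) {            // fix = { lat, lng, timestamp }
      const last = history[history.length - 1];
      if (last) {
        const dt = Math.max((fix.timestamp - last.timestamp) / 1000, 1);
        const speed = haversineMeters(last, fix) / dt;
        if (speed > MAX_PLAUSIBLE_SPEED) return false;  // e.g. the sudden "30 miles away" jump
      }
      history.push(fix);
      return true;
    }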
Wait, what? Is my router broadcasting a location to someone? What technology is this, and how do I make sure my router isn't doing it?
This is a good premise to an "off-by-one" parable where it turns out the neighbors are phone thieves.
Anyway, we use as much ES6 as Node 4 allows at work. Transpiling on the server never made much sense to me. I also used to sprinkle the fat-arrow syntax everywhere just because it looked nicer than anonymous functions, until I realized it prevented V8 from doing optimization, so I went back to function until that's sorted out (I don't like writing code that refers to `this`, so I rarely need binding; while the => syntax is concise, I rarely use it as a Function.bind replacement). Pretty much went through the same experience with template strings. Generator functions are great.
I'm not a fan of the class keyword either, but to each their own. I think it obscures understanding of modules and prototypes just so that ex-Class-based OOP programmers can feel comfortable in JS, and I fear the quagmire of excessive inheritance and class extension that will follow with their code.
No, let is hoisted to the top of the enclosing scope [1] ("temporal dead zone" notwithstanding). let, however, is not hoisted to the top of the enclosing function.
[1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
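A quick example of the distinction:

    // let is hoisted to the top of its block, but reading it before the declaration
    // throws (the "temporal dead zone"), unlike var which reads as undefined.
    function demo() {
      console.log(a);      // undefined (var is hoisted and initialized)
      // console.log(b);   // ReferenceError: b is in the temporal dead zone here
      var a = 1;
      let b = 2;

      if (true) {
        let c = 3;         // scoped to this block, not to the whole function
      }
      // console.log(c);   // ReferenceError: c is not defined
    }
    demo();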
I always imagine cheatsheets to be just that; something I can render on one sheet of paper. Printing the entire raw README text would take 4 pages (2 sheets, front and back).
I think it would be better titled "ES6 best practices", since that's a more accurate description of what it is.
One minor quibble: I was bothered by the misuse of the words "lexical" and "interpolate". The lexical value of the keyword "this" is the string "this". And you might translate between two technologies such as CommonJS and ES6, but interpolating between them implies filling in missing data by averaging known values. Granted, this word is commonly abused. Sorry this is a bit pedantic, but these corrections would improve the document, IMO.
Part of me feels that obscuring Javascript's roots in this respect is very un-Javascript-y. What think ye?
Coming from Ruby, I love template literals and feel right at home with them; I wish even C could have them (if that makes any sense!).
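For anyone who hasn't tried them, the appeal is just interpolation without concatenation:

    const name = 'world';
    console.log(`hello, ${name}! 1 + 1 = ${1 + 1}`);  // expressions interpolate directly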
But most of the time this is in the context of Node.js development, and in every case I use Babel.js to turn the end result into ES5 code.
I'm perfectly comfortable with using ES5, because as a freelancer/contractor I often have to do so. But I really miss the ES6 stuff and the more I use it, the more time it takes me to 'switch' to a mindset where I'm only allowed to use ES5 functionality.
Nonetheless, it strikes me as really odd to actively prefer ES5. Having worked with Ruby and Python (among others), ES5 feels limiting for no good reason. The only rationale I can think of for preferring ES5 is nostalgia.
Could you elaborate on why you don't like the 'perl/python' style changes? Because I truly do not understand why one would choose to limit oneself to things like .bind(this) instead of the different forms of arrow functions that make functional-style programming so much easier. And I've found that the best part of JS is that it's decently functional.
Edit: I would agree when it comes to the new 'class' keyword though. I'm not a fan of that.
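For anyone following along, this is the trade-off being discussed; a small sketch (the Counter example is just an illustration):

    // An arrow function closes over the enclosing `this`, so it replaces the explicit
    // .bind(this) dance of the ES5 style.
    function Counter() {
      this.count = 0;

      // ES5 style: bind (or var self = this) keeps `this` pointing at the Counter.
      setInterval(function () {
        this.count++;
      }.bind(this), 1000);

      // ES6 style: the arrow function has no `this` of its own, so no bind is needed.
      setInterval(() => {
        this.count++;
      }, 1000);
    }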
http://aws.amazon.com/certificate-manager/pricing/
https://docs.aws.amazon.com/acm/latest/userguide/acm-certifi...
Now, 5 days later, AWS lets me create one for free in 3 minutes, with zero hassles. I cannot select it in beanstalk yet, but I am sure that will come. I am consistently amazed by how frequently AWS satisfies needs I barely knew (or didn't know) I had.
The only thing I can think of is that AWS Certificate Manager only validates by email addresses, which can be problematic if you don't have MX records or don't have control over them (maybe a large organization where the people who do control those email addresses won't click simple verification links).
It seems a bit inconsistent as to when it will use the email on the whois record for the validation, too. For some subdomains I try it will allow validation using the whois address; other times it's just the common aliases at sub.domain.com (which requires an MX record). So I guess if you're nesting deeper than one subdomain (e.g. abc.def.example.com) then maybe it'd be easier to get Let's Encrypt set up than to try to get MX records for abc.def.example.com.
Shameless Plug/Disclaimer: I had been working on a tool to make it dead simple to use Let's Encrypt certificates for CloudFront/ELBs and handle auto-renewal via Lambda. I'm not sure there is any use for this now that this exists, though.
https://docs.aws.amazon.com/acm/latest/userguide/setup-websi...
> Currently, ACM Certificates are associated with Elastic Load Balancing load balancers or Amazon CloudFront distributions. Although you install your website on an Amazon EC2 instance, you do not deploy an ACM Certificate there. Instead, deploy the ACM Certificate on your Elastic Load Balancing load balancer or on your CloudFront distribution.
Ideally ACM certificate issuance and deployment would be two separate things, and this would be a general-purpose CA, which just happens to have integrated deployment tools for ELB and CloudFront.
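For what it's worth, a rough sketch of requesting a certificate programmatically with the Node aws-sdk (the domain, region, and error handling here are placeholders; note that ACM validates by emailing the domain's contacts, and the resulting certificate can only be attached to ELB or CloudFront, not exported):

    const AWS = require('aws-sdk');
    const acm = new AWS.ACM({ region: 'us-east-1' });

    acm.requestCertificate({
      DomainName: 'www.example.com',
      SubjectAlternativeNames: ['example.com'],
      DomainValidationOptions: [
        { DomainName: 'www.example.com', ValidationDomain: 'example.com' }  // where the email goes
      ]
    }, (err, data) => {
      if (err) return console.error(err);
      console.log('Certificate ARN:', data.CertificateArn);  // reference this ARN from ELB/CloudFront
    });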
The only confusing part was that port 443 was blocked in the ELB by default (which made it look like it didn't work, but got fixed easily as soon as I figured it out). I've never seen an easier way to do this to date.
As long as AWS provides an API to provision certificates, that would be awesome. I use Nginx, and need access to the private key and cert.
1. Some caveats here. Firstly, I did not have many people to buy out and they were willing to sell at a reasonable price. Secondly, my business is in biotech/bioinformatics and we had put a lot of resources into R&D. This R&D had real value that could be used to bring the business back to life.
[1] http://venturebeat.com/2016/01/20/sidecar-we-failed-because-...
Funny how your mind gets busier working to build revenue in Europe, without a comfortable cushion like the SF guys seem to have.
Maybe I am not looking at this right but this part doesn't make sense to me:
> In many cases, <2 months is the point of no return. If you are in this state it is immediately necessary to lay off your employees and give them severance, pay down your obligations, and use your remaining cash for shutdown costs.
So is that for companies that had a year+ of runway at some point and are now down to 2 months? What about companies that never had 1 year of runway? The differences between those are pretty big.
For example if you have a 4 person startup and 2 months of runway after being on the market for only 4-6 months, you are supposed to just shut it down?
No, you take consulting jobs and do side work till you can get higher revenue or some financing.
I think, like most startup articles, this applies to companies who have already gotten past the seed stage and initial traction, and thus is not applicable to 90% of us.
Except, sometimes it doesn't? If you look at the notes[0] at the bottom of The Fatal Pinch:
>There are a handful of companies that can't reasonably expect to make money for the first year or two, because what they're building takes so long. For these companies substitute "progress" for "revenue growth." You're not one of these companies unless your initial investors agreed in advance that you were. And frankly even these companies wish they weren't, because the illiquidity of "progress" puts them at the mercy of investors.
What do you do if you're one of those companies? There's plenty of business models that could be attractive acquisition targets (read: billions), but otherwise can't monetize to save their souls.
Two pieces of advice often encountered (paraphrasing):
"Treat each funding round as if it's your last."
"VC money is like rocket fuel. It's intended to be burned at a high rate."
I imagine reconciling both is difficult at best.
> The primary reason to implement functionality in the operating system kernel is for performance...
OK, this seems like a promising start. Proponents say that unikernels offer better performance, and presumably he's going to demonstrate that in practice they have not yet managed to do so, and offer evidence that indicates they never will. But it's not worth dwelling on performance too much; let's just say that the performance arguments to be made in favor of unikernels have some well-grounded counter-arguments and move on.
"Let's just say"? You start by saying the that the "primary reason" for unikernels is performance, and finish the same paragraph with "its not worth dwelling on performance"? And this is because there are "well-grounded counter-arguments" that they cannot perform well?No, either they are faster, or they are not. If someone has benchmarks showing they are faster, then I don't care about your counter-argument, because it must be wrong. If you believe there are no benchmarks showing unikernels to be faster, then make a falsifiable claim rather than claiming we should "move on".
Are they faster? I don't know, but there are papers out there with titles like "A Performance Evaluation of Unikernels" with conclusions like "OSv significantly exceeded the performance of Linux in every category" and "[Mirage OS's] DNS server was significantly higher than both Linux and OSv". http://media.taricorp.net/performance-evaluation-unikernels....
I would find the argument against unikernels to be more convincing if it addressed the benchmarks that do exist (even if they are flawed) rather than claiming that there is no need for benchmarks because theory precludes positive results.
Edit: I don't mean to be too harsh here. I'm bothered by the style of argument, but the article can still be valuable even if just as expert opinion. Writing is hard, finding flaws is easy, and having an article to focus the discussion is better than not having an article at all.
He voiced all this here [1], and so I countered by listing stuck paradigms in traditional monolithic Unixes, as well as reopening my inquiry about Sun's Spring research system, which he seems to scoff at but whose academic output I find impressive. He has yet to respond to my challenge.
The biggest problem with Unikernels like Mirage is the single language constraint (mentioned in the article). I actually love OCaml, but it's only suitable for very specific things... e.g. I need to run linear algebra in production. I'm not going to rewrite everything in OCaml. That's a nonstarter.
And I entirely agree with the point that unikernel simplicity is mostly a result of their immaturity. A kernel like seL4 is also simple because, like unikernels, it doesn't have that many features.
If you want secure foundations, something like seL4 might be better to start from than Unikernels. We should be looking at the fundamental architectural characteristics, which I think this post does a great job on.
It seems to me that unikernels are fundamentally MORE complex than containers with the Linux kernel. Because you can't run Xen by itself -- you run Xen along with Linux for its drivers.
The only thing I disagree with in the article is debugging vs. restarting. In the old model, where you have a sys admin per box, yes you might want to log in and manually tweak things. In big distributed systems, code should be designed to be restarted (i.e. prefer statelessness). That is your first line of defense, and a very effective one.
Here's a fully mixed-language-programmable, locally- and remotely-debuggable, mixed-user- and inner-mode processing unikernel, with various other features...
This from 1986...
http://bitsavers.trailing-edge.com/pdf/dec/vax/vaxeln/2.0/VA...
FWIW, here's a unikernel thin client EWS application that can be downloaded into what was then an older system, to make it more useful for then-current X11 applications...
From 1992...
http://h18000.www1.hp.com/info/SP3368/SP3368PF.PDF
For anybody that wants to play and still has a compatible VAX, or wants to try the VCB01/QVSS graphics support in some versions of the (free) SIMH VAX emulator, the VAX EWS code is now available here:
http://www.digiater.nl/openvms/freeware/v50/ews/
To get an OpenVMS system going to host all this, HPE has free OpenVMS hobbyist licenses and download images (VAX, Alpha, Itanium) available via registration at:
https://h41268.www4.hp.com/live/index_e.aspx?qid=24548&desig...
Yes, this stuff was used in production, too.
The trick though is they did only one thing (network attached storage) and they did it very well. That same technique works well for a variety of network protocols (DNS, SMTP, etc.). But you can do that badly too. We had an orientation session at NetApp for new employees which helped them understand the difference between a computer and an appliance; the latter had a computer inside of it but wasn't programmable.
I'm pretty sure you debug an Erlang-on-Xen node in the same way you debug a regular Erlang node. You use the (excellent) Erlang tooling to connect to it, and interrogate it/trace it/profile it/observe it/etc. The Erlang runtime is an OS, in every sense of the word; running Erlang on Linux is truly just redundant, since you've already got all the OS you need. That's what justifies making an Erlang app a unikernel.
But that's an argument coming from the perspective of someone tasked with maintaining persistent long-running instances. When you're in that sort of situation, you need the sort of things an OS provides. And that's actually rather rare.
The true "good fit" use-case of Unikernels is in immutable infrastructure. You don't debug a unikernel, mostly; you just kill and replace it (you "let it crash", in Erlang terms.) Unikernels are a formalization of the (already prevalent) use-case where you launch some ephemeral VMs or containers as a static, mostly-internally-stateless "release slug" of your application tier, and then roll out an upgrade by starting up new "slugs" and terminating old ones. You can't really "debug" those (except via instrumentation compiled into your app, ala NewRelic.) They're black boxes. A unikernel just statically links the whole black box together.
Keep in mind, "debugging" is two things: development-time debugging and production-time debugging. It's only the latter that unikernels are fundamentally bad at. For dev-time debugging, both MirageOS and Erlang-on-Xen come with ways to compile your app as an OS process rather than as a VM image. When you are trying to integration-test your app, you integration-test the process version of it. When you're trying to smoke-test your app, you can still use the process versionor you can launch (an instrumented copy of) the VM image. Either way, it's no harder than dev-time debugging of a regular non-unikernel app.
For instance, you could imagine a unikernel that did support fork() and preemptive multitasking, but took advantage of the fact that every process trusts every other one (no privilege boundaries) to avoid the overhead of a context switch. Scheduling one process over another would be no more expensive than jumping from one green (userspace) thread to another on regular OSes, which would be a huge change compared to current OSes, but isn't quite a unikernel, at least under the provided definition.
Along similar lines, I could imagine a lightweight strace that has basically the overhead of something like LD_PRELOAD (i.e., much lower overhead than traditional strace, which has to stop the process, schedule the tracer, and copy memory from the tracee to the tracer, all of which is slow if you care about process isolation). And as soon as you add lightweight processes, you get tcpdump and netstat and all that other fun stuff.
On another note, I'm curious if hypervisors are inherently easier to secure (not currently more secure in practice) than kernels. It certainly seems like your empirical intuition of the kernel's attack surface is going to be different if you spend your time worrying about deploying Linux (like most people in this discussion) vs. deploying Solaris (like the author).
It comes off as a slew of strawman arguments... for example, the idea that unikernels are defined as applications that run in "ring 0" of the microprocessor... and that the primary reason for them is performance...
All of the unikernel implementations he mentioned (MirageOS, OSv, rump kernels) run on top of some other hardware abstraction (Xen, POSIX, etc.), with perhaps the exception of a "bmk" rump kernel.
We currently have a situation in "the cloud" where we have applications running on top of a hardware abstraction layer (a monolithic kernel) running on top of another hardware abstraction layer (a hypervisor). Unikernels provide a (currently niche) solution for eliminating some of the 1e6+ lines of monolithic kernel code that individual applications don't need and that introduce performance and security problems. To dismiss this as "unfit for production" is somewhat specious.
I wonder if Joyent might have a vested interest in spreading FUD around unikernels and their usefulness.
Some additional meat:
- The complaint about Mirage being written in OCaml is nonsense, it's trivial to create bindings to other languages, and in 40 years this never stopped us interfacing our e.g. Python with C.
- A highly expressive type/memory safe language is not "security through obscurity", an SSL stack written in such a language is infinitely less likely to suffer from some of the worst kinds of bugs in recent memory (Heartbleed comes to mind)
- Removing layers of junk is already a great idea, whether or not MirageOS or Rump represent good attempts at that. It's worth remembering that SMM, EFI and microcode still exist on every motherboard, using some battle-tested middleware like Linux doesn't get you away from this.
- Can't comment on the vague performance counterarguments in general, but reducing accept() from a microseconds affair to a function call is a difficult benefit to refute in modern networking software.
> The primary reason to implement functionality in the operating system kernel is for performance: by avoiding a context switch across the user-kernel boundary, operations that rely upon transit across that boundary can be made faster.
I haven't heard this argument made once. There are performance benefits (smaller footprint, compiler optimization across system call boundaries, etc...). However, the primary benefit is not performance from eliminating the user/kernel boundary. Should you have apps that can be unikernel-borne, you arrive at the most profound reason that unikernels are unfit for production and the reason that (to me, anyway) strikes unikernels through the heart when it comes to deploying anything real in production: Unikernels are entirely undebuggable.
If this were true, and an issue, FPGAs would also be completely unusable in production. The smaller point about porting applications (whether targeting unikernels that are specific to a language runtime or more generic ones like OSv and rump kernels) is the most salient; it will probably restrict unikernel adoption.
For Docker, if only to provide a good substrate for dev environments for people running Windows or Mac computers, it is very promising.
In particular, I am suspicious of the idea that unikernels are more secure. Linux containers protect the application in several ways that neither unikernels nor hypervisors really can. Point being, a unikernel (as defined) can do anything it wishes to on the hardware. There is no principle of least privilege. There are no unprivileged users unless you write them into the code. It's the same reason why containers are more secure than VMs.
Users are only now, and slowly, starting to understand the idea that containers can be more secure than a VM. False perspectives and promises of unikernel security only conflate this issue.
That said, I do think the problems with unikernels might eventually go away as they evolve. Libraries such as Capsicum could help, for instance. Language-specific or unikernel-as-a-VM approaches might help. Frameworks to build secure unikernels will help. Whatever the case, the problems we have today are not solved, and unikernels are not ready for production -- yet.
This blog post was clearly spurred by the acquisition made by Docker (of which I am an alumnus). I think it's a good move for them to be ahead of the technology, despite the immediate limitations of the approach.
Putting that aside, debuggability is an obvious and pressing issue to production use-cases. Any proponent of unikernels that denies that should be defenestrated. I haven't come across any that do.
How to go about debugging unikernels is unclear because it certainly is still early days. However, I don't think the lack of a command line in principle precludes debuggability, nor does it, to my mind, even preclude using some of the traditional tools that people use today. For example, I could imagine a unikernel library that you could link against that would allow for remote DTrace sessions. Once you have that, you can start rebuilding your toolchain.
P.S. Bryan, where's my t-shirt?
As a security engineer, that's a good one sentence summary from my point of view of unikernels, since, forever.
I think the reason why unikernels are being developed is due mostly to ignorance, and if any of them is successful, it will morph into an OS that is closer to Mesos, Singularity, or even Plan9. That's faster, safer, more logical, etc.
> And as shaky as they may be, these arguments are further undermined by the fact that unikernels very much rely on hardware virtualization to achieve any multi-tenancy whatsoever.
Multi-tenancy is needed in some cases, but I don't need it, we use the whole machine, and other than the one process that does all the work, we only have some related processes for async gethost, monitoring/system stats processes, ntpd, sshd, getty.
For Joyent of course they have a book to talk up and they want to sell you their own solution which looks more like containers than a hypervisor. The Joyent solution is I think undoubtedly very interesting and well-considered but I have a suspicion that they've hitched their wagon to the wrong horse and Linux will keep winning.
For a long time the dominant programming environment for IBM mainframes has been VM/CMS, where VM is something like VirtualBox and CMS is something a lot like the old MS-DOS, i.e. a single process operating system. Say what you like but it was a better environment than anything based on micros until you started seeing the more advanced IDEs on DOS circa 1987 or so.
Now the 360 was a machine designed to do everything, but it's clear the virtual memory in most machines is an issue in terms of die size, cost, power consumption and performance and I wonder if some different configuration in that department together with a new approach to the OS could make a difference.
Joyent doesn't sell unikernel services, hence unikernels are bad. Color me shocked. Is it me, or has Joyent become less than upfront about their motives over the last few years? I don't require everyone to embrace "don't be evil" or whatever, but I always get a "righteous" vibe from Joyent employees that seems at odds with their actual behavior. Maybe they feel under siege or whatever, and are reacting to that? The whole thing is vaguely off somehow.
I'm confident this will be addressed eventually. Anyone have a sense of what that will look like? Something like JMX? Something like dumping core, restarting, and analyzing later?
Either the article is written in the context of writing kernel software, which wouldn't have much of an impact on my decision to run my application on a unikernel OS or not, or QNX is a far outlier from other unikernel OS's and that's why I'm so confused.
I haven't any experience with unikernels (still a student), but there are a few concerning things about them. And the main thing is that those concerning things are at their core.
I have only respect and admiration for Mr. Cantrill, but this post felt kinda strange. After reading the last paragraph it sounded like an ad. Maybe they got scared of Docker possibly expanding and taking part of their cookie. I don't know, but these discussions were interesting to read at least...
https://www.google.com.au/trends/explore#q=unikernel%2C%20li...
So does this mean something like a Symbolics machine or an Oberon machine can't be debugged, or does this mean that the unikernel has to be debugged at a higher level by the application(s) it's dedicated to?
Given how invested Joyent is in their current positions, I can see why Unikernels may seem a threat, but none of the things Cantrill has raised as concerns seem insurmountable.
Unikernels are young, and lack tooling/robustness that we have in more traditional approaches. They are not production ready yet, but will likely become a prominent way of building and deploying applications in the future.
Crude or anemic? The program does what you want or it doesn't. Quit trying to make it a human.
Edit: If the author can believe programs are crude or anemic he clearly likes to look at them from a high level, but you need a low level view to get excited about unikernels.
Edit: What?
As mentioned numerous times, there's the whole "reaching space" vs "going into orbit" before landing.
More important to me is the fact that SpaceX streams its attempts live, taking the risk of crashing the rocket out in the open. How many vehicles did BO lose before achieving a vertical landing?
Oh, and what about the fact that they have total control over the location and time of the launch? Meaning they can basically pick their weather with an accuracy no one launching anything useful into space has. For example, the last failed SpaceX landing was officially linked to fog icing the leg locks. That's not going to happen if you launch on a clear day from the desert.
These are more comparable to the Grasshopper tests than to anything SpaceX has done recently: no horizontal speed, full weather control, no reporting on failed attempts, very limited weight. Even the last Grasshopper video seemed to have more side winds that had to be countered than this 100k altitude video.
Even the format of the video itself screams "vaporware" to me. It looks like a trailer for a bad action movie, where some spacey something goes to space, separates, and lands back in 15 seconds. Where the Grasshopper videos left me in awe, looping over them 5 times in a row, the BO ones just make me feel like they sh/could end with some sexual innuendo over their big rocket.
I am impressed by both companies' ambition, and SpaceX clearly has both the time and money advantage over Blue Origin. Let your accomplishments speak for themselves.
From: https://what-if.xkcd.com/58/
> The reason it's hard to get to orbit isn't that space is high up. It's hard to get to orbit because you have to go so fast.
A popular sentiment in that industry is that rocketry is like writing software composed of many modules and testing each module separately on mac, then deploying the entire build on linux. If it doesn't work, you don't just back out the conversion error or stray quotes you left in, your rocket explodes.
The engineering spend alone is massive, as is the damage to the company when a failure is syndicated across youtube. Taking big risks is something we should be promoting.
We are in a technological renaissance and it starts with lowering launch costs to achieve realtime LEO satellite blanketing and distributed communication channels to connect to the other fucking 3 billion people without internet. Bezos is accomplishing something great, and we don't need to qualify that statement.
He and Musk are definitively the Jobs and Gates of the 21st century if you want to use the obvious cliche.
What Gates did. What Jobs accomplished. It was pretty fucking powerful. Musk and Bezos are sort of doing that, except both are working in at least 3 industries at that same scale.
I wish Blue Origin, Sierra Nevada, Firefly and all the other people in new space well. Nano-sats will provide realtime insight to the earth, people will be able to own a satellite in ~5-10 years because of these advancements.
This is good for all of us, and the only negative thing to say about it is that, for god's sake Jeff, that rocket does look a bit like a stubby penis.
Blue Origin = Rollercoaster.
I really don't see why these companies are competing. They are in totally different markets. Sure, there is some technological crossover in that they both use rockets, but this is like comparing a prius to a locomotive.
When you have only a few seconds above the "official" space altitude on a parabolic trajectory, I wouldn't say "working and living in space", and especially not "millions" at the same time.
Is it me, or is this primarily a pitch video?
Now, that he is getting closer to having tourist flights outside the atmosphere than Virgin Galactic? That is pretty cool and a fair comparison. Being able to out execute Burt Rutan? That counts for a lot, but don't try to compare yourself to SpaceX until you're putting things into LEO and getting back the hardware to use again.
https://www.blueorigin.com/astronaut-experience#youtube-YJhy...
That said, I didn't get to watch the launch and join in its success or failure, so I'm finding it difficult to actually care as much as other launches.
Also, can people please stop knocking Blue Origin. We get it at this point, okay? I'm a huge fan of SpaceX and Elon Musk but does Blue Origin have to lose for SpaceX to win? No. There's nothing in this post from Bezos bashing SpaceX as far as I can see. There's simply saying, look, we did it again with the same refurbished rocket. Good on them. May they do it again and again. And so may SpaceX. The next space race is on, happy days!
edit: got my facts straight
Yeah, Docker is about to get some enhancements for sure. Maybe some real security improvements, too. You can count on it.
There are two touted benefits of unikernels, performance and security. Performance turns out to be a red herring, as the overhead of an OS vs a Hypervisor turns out to be roughly equivalent (with the OS actually winning in some use cases).
Security is definitely an issue, but it's so abstract. My company is a compliance (a very specific industry's compliance) cloud provider and we have gone with Docker as we get to use the OS as our Hypervisor, which means it is much more extensible and, in our use case, secure as we are able to auto-encrypt all network traffic coming out of the hosts with a tap/tun virtual device.
Two things need to happen to make unikernels attractive. A new Hypervisor needs to get made, one that is just as extensible as an OS around the isolated primitives. It should also have something extra too (like the ability to fine tune resource management better than an OS can). Secondly a user friendly mechanism like Docker needs to happen.
I'm not very hopeful given that their CTO is quite open about wanting to embrace-extend-extinguish competing technologies. This move embraces unikernels, and now they are perfectly positioned to go the rest of the way.
The discussion at https://news.ycombinator.com/item?id=10904452 may shed light about my complaint.
There's a reason Docker is so heavily funded by the biggest cloud companies. They're the ones who stand to benefit from specialized Docker containers optimized for their own platforms. It's a great way to package open source services and leverage the effort of the developer community into centralized profit.
It seems blatantly obvious that Docker is looking to build the app store of devops. I wish them the best of luck, but they are going to face some heavy resistance from open source initiatives. There is nothing about Docker that makes it fundamentally superior to the systems it's based on, specifically the LXC project. When developers finally wake up to the fact that they are sleep walking into a massive walled garden, Docker will lose some of its clout.
[1] http://unikernel.com/#notice not to be confused with the community website at http://unikernel.org :)
I've been following the Mirage and rumpkernel lists for a while and its nice to see these hackers getting traction (and money!) for their efforts.
Not too long ago unikernel.org was started, which IIRC was billed as a community-driven "one stop shop" for information on the subject, which I assume is independent of the company "Unikernel Systems". Hopefully Docker won't go rogue and start attacking others that use the term "unikernel" by claiming that it's trademarked or something like that.
Congratulations Amir et al!
I didn't expect unikernels to gain mainstream notice for at least 6 months to a year.
Other than reducing complexity, our distributed database uses the virtual memory hardware in a unique way, so a mono-kernel was essential.
Having said that, the easiest way to develop such a system is not on the bare metal; it's by running Linux in such a way that it only uses the first 1 or 2 cores, and then running your "custom kernel" on any other cores in the system. Then you can use a normal debugger and utilities during development. It's only when you actually want to put it into production that you can consider not using Linux at all.
[0] https://archive.fosdem.org/2013/interviews/2013-antii-kantee...
Were the terms of the deal disclosed?
Perhaps Unikernel Systems ran out of money and it was an "acqui-hire"?
> The result of this is a very small and fast machine that has fewer security issues than traditional operating systems (because you strip out so much from the operating system, the attack surface becomes very small, too).
Obviously traditional operating systems provide a lot of interfaces that represent attack surface, but they're generally able to be secured. On the other hand, much of the operating system actually _implements_ security, so if you throw it out, you're losing that.
Very nice site! Since your site is so much based around search, I thought I would pass on a few suggestions based on what I saw. If you happen to be using a search based engine for your content such as ElasticSearch, SOLR or maybe Azure Search :-), there are a few simple things you could add to make the experience a little smoother. Suggestions in the search box are nice to allow people to quickly see results as they type. You could even add thumbnails of the images in the type ahead such as you see using the Twitter Typeahead library (http://twitter.github.io/typeahead.js/). I also noticed that your search does not handle spelling mistakes or phonetic search (matching words that sound similar). Finally, through the use of Stemming, search engines can often help you find additional relevant content. For example, if the person is looking for mice, but your content has the word mouse in it, this will bring back a match. Since you don't have a lot of content, this can really help people find relevant content.
Hope that helps.
I'm particularly uncomfortable with Flickr's "no known copyright restrictions". What if people infer PD from that and upload it somewhere else under CC0? Then it gets sucked into this finda.photo? Yuck.
As for finda.photo, why are you truncating the source down to just a domain name?! Many of the sources include proper uploader details so why aren't you copying those over and displaying them?
I know you're not required to, but attribution isn't a bad thing if you can give it. I for one would be much happier using a photo if I knew exactly where it came from.
Disclaimer: I work for the company behind GraphicStock. Oh, and we're hiring!
- Phones aren't going to replace credit cards
- You will need to type in all your passwords each time you use them
- Two-factor authentication will need to be done with a different device
- HealthKit and other medical records will need to be moved elsewhere
- Any profession where there are very serious consequences for leaked communication will no longer be able to do it through their smartphone (lawyers, doctors, executives)
Basically losing or having your mobile phone stolen will be equal to a burglar pulling up to your house or office and driving away with every sensitive document and record in the back of a van.
No tech company wants to see the end of the mobile revolution. Forget the national interest side to this, anyone supporting broken encryption basically looks like a total moron.
Hopefully there's a primary challenger or soon will be, I'll donate.
[1] https://en.wikipedia.org/wiki/Elk_Grove,_California#Top_empl...
[2] http://www.bizjournals.com/sacramento/news/2015/12/07/someth...
{note: [2] gives a significantly larger current headcount than [1]}
A few questions I posed to the NY senator earlier this week:
1. Would you use such a phone knowing that the government / Apple / seller of the phone could easily get into it?
2. Would it be legal for someone in the legal profession to use such a phone without being disbarred for negligence of the right to private communication?
3. If sold unlocked, and then later locked (i.e. every phone right now), where's the change?
4. Where does the 4th Amendment fit in with this?
5. What should we do with old phones that don't support this? Dump them in the bay, I guess?
6. Where are the technical experts telling you that this is actually feasible to do securely and safely? I'm looking hard, but only seeing negative responses from those that know what they're talking about.
7. Who's responsible for fixing the broken device once the master key gets leaked? The manufacturer? The state of {CA/NY}?
8. The list goes on.
When the legislature wants to do something unpopular (or even stupid which is what this is), associate it with the "Evil Of The Era" and propose the bad legislation as the solution to said evil. These days, popular "Evils" are Human Trafficking, Child Porn, and "Terrorism". The first two evoke extreme emotion of crimes committed against the most innocent of victims, so they're the best choice in this scenario. In the 80s-90s it was anything to reduce "Crack Babies" or win "The War on Drugs".
It's an old trick -- when people talk about logical limits placed on the first amendment, you'll hear the phrase "Shouting Fire in a Crowded Theater". Most of those who utter it don't realize that this phrase originated as part of a ruling that had nothing to do with "fire" or a "crowded theater" but was made to curtail the dangerous speech of opposing the draft during World War I[1].
[1] https://en.wikipedia.org/wiki/Shouting_fire_in_a_crowded_the...
It doesn't say how, and it doesn't give a time frame.
So: Provide an API to accept a key. Allow two key attempts per second. Start with key 0x0000..000, next try 0x000..0001. This is guaranteed to complete, you just have to be prepared to wait a while.
(Yes, I know that courts are unhappy with this kind of thing. But the bill is a crappy bill, in many regards).
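To put rough numbers on "a while" (back-of-the-envelope only):

    // At the sarcastic 2 attempts/second API: a short PIN falls quickly,
    // an actual 128-bit key does not finish before the sun burns out.
    const attemptsPerSecond = 2;
    const secondsPerYear = 60 * 60 * 24 * 365;

    const sixDigitPin = Math.pow(10, 6) / attemptsPerSecond;                    // 500,000 s
    const aes128Years = Math.pow(2, 128) / attemptsPerSecond / secondsPerYear;  // ~5.4e30 years

    console.log((sixDigitPin / 86400).toFixed(1), 'days');     // ~5.8 days
    console.log(aes128Years.toExponential(2), 'years');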
Do you want crypto to work? Or do you want to be forced to replace crypto with security theater? Is your business actually willing to actively protect a free internet? Or is it easier to assume this is "someone else's problem"?
I guess we will see which companies defend themselves, and which companies think being a collaborator is more profitable?
Shall is the source of more litigation than any other single word in the English language. It can always be debated because no one knows if it reliably means "can", "must", "may", "might", "will", "should", "ought to", or "is allowed to".
All the above uses can be supported with evidence. Because language evolves.
It's a killer word for any law or contract, and guaranteed to be disputed.
I am not a lawyer, btw.
But if this somehow passes, it will get tossed because of the wording.
As a matter of fact, I'm certain that the current leaders of EU countries who publicly invited immigrants to their states (we all know the most prominent one) were considering this as an easy way to change the privacy laws - and be applauded for it.
Do we start referring to encrypted devices without back doors as contraband?
"...race actually turned out to be more significant than a criminal background. Notice that employers were more likely to call Whites with a criminal record (17% were offered an interview) than Blacks without a criminal record (14%). [2]"
So all the people acting as though our society is some meritocratic, utopia can keep that bullsh*t to themselves.
On the other hand, there is no doubt that blacks under-perform relative to whites when it comes to academics. There are obvious reasons for this, but those reasons don't change the truth. Companies that are heavy on engineering are going to use academic markers to try to select the best of the best. There aren't enough blacks at or above the level of their white peers in the top percentiles of CS to give us proportional representation.
[1] http://www.nber.org/digest/sep03/w9873.html[2] http://thesocietypages.org/socimages/2015/04/03/race-crimina...
My institution was heavily recruited by big corps, government labs, and east coast companies.
The "best" students, by GPA, were in high-demand for all of the above. Many were heavily recruited into management tracks for non-IT companies. A large number of government institutions and defense contractors were also eager to land new grads from our school. The "best" students, by hacking skills were (maybe stereo-typically for hackers) less interested in classes that didn't involve slinging code, but also all landed programming gigs. Less committed students, from either metric, seemed to still be getting jobs but I can't generalize as to the job type.
I think it is fair to say that my undergrad course-work was not as demanding as (guessing a bit here) Stanford, MIT, or CMU. But my GPA and GRE scores landed me multiple job and graduate school offers.
One aspect of hiring from HBCUs (or at least mine) is that there is a very strong network effect - alumni come back to the school and recruit interns and full-time hires for their companies, help prep students for the process, and students look to those alumni as trusted sources.
If Silicon Valley really wants to hire from HBCUs, that is the path I would recommend. Hire a few alums from the HBCUs and make recruiting and grooming candidates a priority for those alums.
I find this sentence really shocking, perhaps because I'm french and in France we try to assimilate people more (I don't really know), but I would definitely think that as a white software engineer I have a lot more in common with the black software engineer working with me than with whatever random white dude.
This statement may be true if "people of color" means African American. Otherwise, it is just not the case. I do think, through my personal experience, the Valley is probably the most diverse place that I have been. I've seen people from all over the world here: Asian, Latino, European, etc.
The answer is buried three quarters of the way into the article:
> When they started interviewing seniors, companies found as Pratt did at Howard that many were underprepared. They hadn't been exposed to programming before college and had gaps in their college classes.
So why isn't the article titled, "Why Aren't Enough Black Coders Prepared for Silicon Valley"?
It would also be interesting to look at selected majors across ethnic groups. I suspect that blacks go into CS at a lower rate than other ethnic groups.
With very minor changes, they could have used the correct words to cover the topic they really wanted to cover: that there are fewer black programmers in SV than is desired/expected/needed. And that is a topic that deserves discussion. But because the author minimized the experiences of a huge number of other minority groups rather than focusing on the concerns at hand, we are now squabbling about essentially irrelevant material.
95% of the article could remain intact. By cutting the 5% which is both fluff and offensive to other groups, the rest of the article would be much stronger.
Secondly, who cares what schools top-tier companies are targeting. If Howard is churning out software engineers that are so good they can't be ignored, a) they won't need Google et al. to hire them, and b) their skills will speak for themselves when they apply for a job.
It seems like so many people (black, white, Asian, etc) actually buy into this socially constructed division by culture or skin color which is completely insane to me. To me it's like dividing people into groups by eye or hair color and saying you feel unwelcomed by the blue eyed people.
Articles like this seem to reinforce the notion that there is this 'otherness' of culture and skin color. If Google ,Facebook, etc are ignoring software engineers that are top notch from Howard and other historically black colleges, that would be a problem, but I doubt that is the case. Most companies want people that can get the job done well and know their stuff in my experience (I've worked in Silicon Valley and Fortune 50 companies).
The article seems to repeatedly make the point that the black people at tech companies were feeling out of place while working at Google, etc., as if any Indian, Asian, or White person does not experience the same thing (someone from India will have to learn the culture of SV just like someone from Howard Univ. or some white person from Alaska). Who cares if you don't watch the same TV shows or read the same books. If anything, I think that's a good thing, as it's a starting point to learn more about something you haven't experienced. I think the most important thing is mindset and attitude going into situations like this. Curiosity and open-mindedness would do wonders for the people in the article who feel like 'others' in SV.
I don't feel like the culture of SV is as homogeneous as they are trying to project; this 'otherness' is the real projection.
I've never openly been discriminated against, or felt like the color of my skin had anything to do with my success in Silicon Valley or on the East Coast while working at tech companies. I've found almost all people of all 'races' to only care about competence and efficiency (other than the occasional jerk or misanthrope).
It's tough to describe the feeling, but when you're the only black person in the room, you do feel different--a little uncomfortable. However, I don't think this reflects a conscious effort to not hire blacks, rather there are institutional and socioeconomic barriers that leave us underrepresented in tech and many other fields.
The school happens to be historically black but I'd be surprised if you found hiring statistics from a majority white school with a similarly ranked CS program to be substantially different.
Black engineers would prefer to work in IT at a big bank with high steady pay than opt for the highly variable risk/return profile of being an engineer in SV.
And why do they do this? If you look at poverty being an overriding theme for blacks in America, even if they themselves are not poor, then one would clearly prefer a lower-risk, medium/high-reward job over a high-risk, low- or super-high-reward job.
Now, the above only explains why black Americans under-participate in startup culture. It says nothing about why they are underrepresented at high-paying, low-risk shops like FB, Google, YHOO, Salesforce, ORCL, etc. Unless, of course, you need to have first slugged it out at a few startups before getting a job at a bigger shop. I'd say that's maybe only true for lateral hires and not kids right out of university.
Also, the problem with Howard not being a top tier school applies to every school that is not in the top tier. Many do not even get the same access to recruiters that Howard does.
I also believe schools should be teaching fundamentals and theory and not be used as job training.
That said, the assimilation problem, "cultural fit", is real, but is often neglected. Many programs trying to fix the minority imbalance simply focus on outreach, the recruiting pipeline.
It's actually too Asian. And too Jewish. That is, if you're using, you know, math, and a simpleton's understanding of demographics. If you're using contemporary ethnic racketeering, then yeah, it's too white. Even the NFL is too white.
I actually think it would be funny to see Bloomberg come out with an article demanding that fewer Asians and Jews be hired wherever they excel.
Offtopic - but Howard has an amazing marching band. They played Rutgers when I was in school (football), and my favorite part of that game was the Howard band at halftime.
It very well could be that the article misrepresents the efforts of Prof. Burge and the Howard staff since the article is focused on SV.
But if SV is turning away energetic, engaged, intelligent and capable new recruits, then please send them to NYC, Seattle, Chicago, Philadelphia, Triangle Park, LA, or anywhere else where companies are looking to hire.
It might not give you a "direct impact" on SV itself, but it does get your people into good paying jobs where they can further develop their skills and experience (especially for those without a long childhood of working with computers). It's a small industry. Soon enough these graduates will be attending conferences and making an impact on this culture.
More importantly, they'll also be representatives in their local communities, helping to inspire the next generation of students who don't see themselves or their experience reflected in this industry. And perhaps that next generation will be more likely to pick up programming in middle school.
This could be used to reinforce the notion that the people who care about and judge based on color are mostly African Americans.
"More than 20 percent of all black computer science graduates attended an historically black school, according to federal statisticsyet the Valley wasnt looking for candidates at these institutions."
Ah, news for you: they are also not looking at candidates from my community's schools.
Perhaps the article should be why you shouldn't attend a non racially diverse university or a university that doesn't attract employers in your field?
White founder has a business idea and they bring along their friends - most likely white. Those friends bring in their friends and colleagues - also most likely white - to become the executive team. The executives hire tomorrow's managers. By that time the vast majority of employees are white, and even if they work very very hard to hire black people, it will take a very very long time until there is proportional representation all the way up to the executive level. Some execs work well into their 80s, meaning that it could take more than a century until there is population-proportional diversity at any predominantly white-founded institution.
The longer a lack of diversity persists in a company's trajectory the harder it becomes to fix it. The only solution I can think of is for black people to start more companies themselves.
That's the core issue right there. The school hasn't adapted to the technology and practices. What use would you be on day one if your coding knowledge was stuck in 1991?
I don't like this attitude; I am not sure what it should be called, but you should feel relaxed in your own skin, accept diversity, and not mind it when most of the people around you are not the same color.
Why is it unwelcoming? Maybe it's not what you hoped for, but why describe it so negatively?
Look, the US is filled with businesses in big cities that, while are not "software companies", do very much need to write software to conduct operations. Take Houston, Texas for instance. It doesn't matter where you came from, what you look like, or who your daddy is. The game is supply and demand. If you can supply, you are in demand. If you are a native English speaker, then you are already ahead. In this country, if you are willing to move, work your ass off, and actually like programming - eventually you will reach gainful employment. Especially if you can pass a drug test. The first year? Hell no. Look for the hardest shit you can find that people with no patience think they are "too good for", and you will be filling in your experience in no time. Life is not easy or fair. If you are smart enough to do even some half ass programming, you have been given a gift.
Generally, if one wants to get into computing in a highly competitive environment, they should attend a top computing university. Fortunately there are top schools that are public as well as private.
I rarely see blacks (or Hispanics) at computer Meetups in NYC. For that matter, at many computer Meetups, there aren't so many women either.
I remember my own struggles to break into the tech business, many years ago now. Although white and "privileged", i.e. no cultural barriers to entry, I found it very tough and had to jump through hoops, work my way up from semi-tech to actual development positions. I took night school courses on a credit card and got into debt. I bought whatever gear I could afford and stayed up until 3am writing code, then got up and went to my menial job.
The opportunities didn't just fall in my lap; I had to earn them. No glamorous technology titans came knocking on my door, begging me to come interview. I had to work for everything I got, and God, it was hard. It still is.
This same work ethic applied to everyone; I was on the chatboards in the late 80s, all through the 90s, and the 2000s, and the story is always the same. You have to have the right stuff if you want to build a career in technology -- be smart, creative, have some initiative, humility, humor, etc.
So maybe Black Americans don't get that in their upbringing. Maybe they're not taught to be smart, competitive, hard charging over achievers. Maybe they're not encouraged to be creative, to think outside the box, etc. I don't know. What I do know is, you can't compensate for that by handing people undeserved opportunities.
Affirmative action is a failure; it's nothing but a form of welfare. If Google reaches out and hires under qualified people from Howard or wherever, just to say it's trying to overcome "barriers" and achieve "diversity", that's all doublespeak that in the end means "We will hire a few token blacks because we have extra money. It will make us feel good, and it will fool them into thinking they made it. Whatever. We have to do it."
The easiest way to prove this is by looking beyond the software engineering field. Why do big SV companies like Facebook, Google or LinkedIn not have black non-engineering staff?
Are capable black accountants, project managers, lawyers, support staff non-existent too?
http://www.nytimes.com/2015/09/04/technology/silicon-valley-... ...
"Google revealed that its tech work force was 1 percent black, compared with 60 percent white. Yahoo disclosed in July that African-Americans made up 1 percent of its tech workers while Hispanics were 3 percent."
Affirmative Action has worked in other industries.
Why does it fail in Silicon Valley?
Please don't do this here. The HN guidelines specifically ask you not to: https://news.ycombinator.com/newsguidelines.html.
We detached this subthread from https://news.ycombinator.com/item?id=10945107 and marked it off-topic.
To see it visually, this is the bell curve based on millions of test results: http://i.imgur.com/zB1oENS.png?1 There are very simply very few black people in the far-right portion. This is not even a disputed fact (the dispute is mainly over why the curve is skewed and if it can be fixed; the existence of the skew is incontrovertible).
This was at least partly because of the way companies recruited: From 2001 to 2009, more than 20 percent of all black computer science graduates attended an historically black school, according to federal statistics -- yet the Valley wasn't looking for candidates at these institutions.
The average SAT scores at Howard are thoroughly mediocre, on par with second-tier state colleges, and you would not expect an elite company to concentrate on Howard, any more than you would expect it to concentrate on Southern Illinois University or the like. The only reason an elite company would recruit at Howard is for diversity reasons.
Foreman is strong-willed, which sometimes gets him in trouble. "I just chalked it up to soft skills, I guess," he says, explaining that he and his interviewer had clashed. Pratt says he'd been furious to learn that Foreman had been passed over. Other companies said no, too.
So was he a good programmer or not? How do we the readers know that he could do the job and was passed up unfairly?
The phenomenon, stereotype threat, is getting more attention in the Valley, and companies have begun training employees to be aware of it.
The idea of stereotype threat is extremely dubious - http://isteve.blogspot.com/2012/10/john-list-on-virtual-none...
She doesn't fit the profile of what people think of when they think of engineers. Even though people think of Silicon Valley as a big meritocracy, I don't think that's how it works.
There are now a number of companies that do automated programming interviews -- Starfighter, Hacker Rank, etc. Do these manage to overcome stereotype threat? Do these blind interviews allow through more African-Americans? Before throwing around slanderous accusations, one should actually show that Silicon Valley is treating people with the same programming ability differently.
The sad thing is that these tech companies cannot just admit, "We don't recruit at Howard because the SAT scores are not there." Rather these companies have to pretend that anyone can be a great programmer if they just put in the work, and a lot of people end up with false hopes that only get crushed.
Mongols! Light cavalry using composite bows was both unbelievably effective and hard to copy for everybody but steppe nomads. All Mongols were hunters, they practically lived with their bows on their horses. So the whole population could do warfare.
Meanwhile, back in those days, in both Eastern and Western Europe, contemporary warfare revolved around heavy cavalry, and one can't have too many knights. Even if somebody managed to gather an army more or less comparable to the Mongol hordes, heavier cavalry would just be meat for lighter riders making circles around them.
Besides, feudal lands never managed to be centralized enough to counter mongols. In medieval Rus' the need to centralize led to the rise of Moscow - and it took quite a while anyway.
The abstract does a wonderful job. It is well worth reading.
The longbow was cheap and technically superior, but required training. The crossbow was more expensive, but required less training. The rulers of England were less worried about rebellion, so it was OK to invest in training. The rulers of France/Scotland were not so keen, for fear of giving the people the means to overthrow them (Scotland is not in the title, but is covered in the article along with France).
Perhaps an analogy could be painted with companies today. Those that churn, and those that nurture skills.
He is also very knowledgeable about all things from this period of history: https://www.youtube.com/watch?v=hUYd6pNy6QU
There is also some more detail regarding the make up of the "English" army bowmen here: http://www.bowyers.com/bowyery_longbowOrigins.php
Sidenote:
- Bowyer: Makes the bow
- Fletcher: Makes the arrow
- Stringfellow: Makes the string
- Arrowsmith: Makes the arrowhead
I admittedly did not read carefully the whole paper, but this possibility does not seem to be addressed.
And never mind instability: the French also weren't as geographically isolated as the English, and thus it was easier to hire mercenaries -- Genoese crossbowmen being a particular example.
The penetrative ability of the longbow is also greatly exaggerated, citing a book that did some pretty shoddy testing (flat sheets of poor quality metal used as targets, but hardened bodkins as penetrators, 10m distance, no padding).
Speed of reloading is another advantage the longbow has, but I think this article overstates it. While some crossbows do require using a stirrup or crank to load them, there are others that you can reload against your hips, and shoot from there, to increase your speed considerably, at some cost to accuracy. I know people who have managed to get 6 bullseyes at 20 yards on a crossbow in 30 seconds. Meanwhile, archers would not be firing at the maximum possible rate in battle; ammunition is a limited resource, and with the draw weights of warbows fatigue would set in quickly. Overall, with the archers they had and bows they had at the time, it is likely that the longbows were able to be a little faster than the crossbows, but it's not a night and day thing; and the range, accuracy, and penetrating power on the crossbows were better.
The simplicity became an advantage in a few battles, which came after substantial rainstorms that caused problems with crossbows' more complicated mechanisms. But the main advantage was how cheap and fast to produce they were; you could easily arm a large populace quite quickly. In order to take advantage of the longbow, you had to do that; you needed a very large number of archers to effectively take advantage of longbows, while you needed fewer archers to be effective with crossbows. But because it was cheap and simple, it was feasible to do that.
I think that cost and simplicity of the longbows were their biggest advantage; speed perhaps a secondary factor, but the sheer numbers were likely to be more important.
There is, of course, an interesting parallel here with some trends in modern military spending. The Joint Strike Fighter is a technological marvel; one of the most advanced pieces of military equipment ever. However, they are staggeringly expensive, and not actually the best dogfighters in the sky. You wonder how much more effective spending that money on more and simpler weaponry might have been.
https://www.goodreads.com/book/show/1195105.The_Armourer_s_H...
However, I've never found a copy online, which is a shame because I remember it as being one of his best.
Longbows are great in open battle, yes. But the hundred years war was a war of sieges and raids (by the English and the great companies), and for those the stonemason is infinitely superior to the longbowman.
For the French, the winning strategy was always to avoid pitched battles and fortify river crossing points until English armies had run out of supplies, then patiently retake lost fortified places through siege.
Didn't England lose the Hundred Years War? At least looking at the map before and after - it lost everything on the continent, including the last remnants of the Angevin lands and Normandy to France, with France emerging significantly bigger and stronger as a result of the war.
While the longbow is a nice nostalgic weapon, the crossbow is technologically more advanced, and in our civilization technology wins:
"Plate armor that could be penetrated by large crossbows, but was impenetrable by longbows, was uncommon in Europe until about 1380"
(funny that as a child I initially made bows, yet soon switched to making crossbows - and they were interesting until I made my first single-shot handgun at the end of 1st grade :)
And when they didn't have the right terrain, those English longbowmen also lost plenty of battles. They had some spectacular victories at Crecy and Agincourt, but they also had their fair share of losses.
Excellent weapon, but no silver bullet.
A population able to defeat the infantry technology of the time requires a different social and legal position than one that doesn't have that ability. That is, the government needs more cooperation and consent of the governed. That government is stronger than other governments, because it can kill their armies. But it is more dependent on that population and so cannot abuse it in the same manner as those other, "weaker" governments.
In the end I shot at a 1.5 meter target about 100 meters away. You could barely see it, as it was lying flat on a hill. At first I could not believe that I had the power to get that far, but it worked out. I missed it by about 12 meters, which was not bad looking at the competition that day.
The rest of the world couldn't enact such rules and thus could not make the Longbow a successful military weapon. It required years of training, not something you can do to a soldier who just got conscripted.
Also, it does make sense that training long-term military personnel was reserved to the ruling feudal class. Still, producing a longbow-compatible population of strong, loyal men might have had other costs than political ones. Precision-wise, a longbow is not a real tournament weapon - and you needed tall, strong men to wield it.
I learnt "canne d'arme and baton d'arme" the "fencing of the i-gnobles".
From feudality to absolute monarchy the raise of monarchy has been made at the costs of "Jaqueries". Peasant revolts of the "non nobles" "ignobles" in latin derived french.
The central control brought by the carolingien and then the bourbon as resulted in strong traditions: knights and nobility are also a force to squalsh revolts.
This and the dissolution of Lances towards "regular armies" after azincourt defeat (longbow involved) has been used to cut the fraternity at arms between feuds members. (Lances were like organic units of versatile men at arms doing their best to bring everyone alive the local feud included).
The strength of the knights was enforced, as in feudal Japan, by preventing the common people from gaining power.
To this end, metal weapons were considered the preserve of knights alone.
Which means that, under the old Frankish laws, for something as grave as sullying a woman in a church outside the accepted "traditions", divine judgement could be called upon ... a duel.
Needless to say, peasants were not authorized to have metal weapons ... officially.
So with all the Jacqueries going on, you don't really want the peasants getting weird ideas about efficient wooden weapons.
And still, the monarchy was a vast joke in this era; cousins of the royal families were lending each other money, and were often tied by blood.
England had no interest in destroying French society.
French kings had no real interest in defeating England. They were mainly aiming to weaken the local suzerains -- the fiefs.
Of course it backfired. Louis XIV almost got killed during the "Fronde".
Back then, they even had a low-bandwidth version of the site: http://web.archive.org/web/19990117023050/http://www.nytimes...
The website included various tutorials on how to use it, including a guide that covers the different browsers. None of the browsers listed are actively developed today: http://web.archive.org/web/19961112182937/http://www.nytimes...
edit: A couple of other observations:
- How many other content websites have published for nearly this long and yet have their oldest articles remain on their original URLs? Most news sites can't even do a redesign without breaking all of their old article URLs.
- I like this Spiderbites feature -- a sitemap of a year's worth of articles (likely for webcrawlers at the time): http://spiderbites.nytimes.com/free_1996/
You can even find quite old articles from key historical times and they're presented just like articles today. For example, the famous Crittenden Compromise is at http://www.nytimes.com/1861/02/06/news/the-crittenden-compro...
The electronic newspaper (address: http:/www.nytimes.com)
hilarious, but, then again, the colon-double-slash still isn't clear to most people. > "The New York Times on the Web, as the electronic publication is known, contains ..., reporting that does not appear in the newspaper, ..."
raises hand
Good memories... Claris Home Page!
A quick start:
* The separation of different forms of content: They don't really mix text with video, images and graphics, even though most web-native bloggers will do it. They seem to lack fluency with mixing media; it's a project for them. They'll staple a video and decorate text with images and graphics, but they don't really communicate with it; they don't say, 'here's how Clinton responded to Sanders:' <video>, or, 'here was the scene when the earthquake struck' <video>, or even in a movie review, here's what the scene looks like: <video> or <image>. Instead, they try to describe the visual with text. Even explanatory graphics are a separate, special production, on a separate page.
* The font in their title: Back when printing fancy fonts was a technological feat, this font communicated that they were serious and sophisticated. Now, if you step back and ignore the history, it looks like a kid playing with fonts. (Look at it this way: would you ever use that font on a website you were designing?). It says, insists even: We're anchored to the paper age and will never let go. We're the old, dying generation. If you want something new, go elsewhere.
* The discoverability of content: Obviously mimicking a newspaper, but a bad choice for the web. How many links are on that home page (scroll down)? And even more content doesn't even appear there. All that hard work and content, unlikely ever to be found, buried and lost forever. It's tragic. But that's what they did in the hard copy newspaper so I guess it's ok.
* Also, where are stories updated since I visited a couple hours ago? Oh look, if I look at every link a red 'updated' indicator is next to some links (just like the web 20 years ago!), which I see if I examine every one of them (and how do I identify brand new links in this massive page of links?) - but where in this multi-page story are the new parts? I guess I'll just re-read the whole thing.
I say this all out of love. They are a very important institution. The news business is hard enough; stop handicapping yourselves! From the outside they look like they still, in 2015, haven't fully embraced the new technology. What would you say about another business' web team (that was not adapting a newspaper to the web) that produced a site that looked like this? Egads. [1]
EDIT: Some minor edits and additions
[1] I'm not blaming the web developers; I assume they are working within the general constraint of: Make it look like the newspaper.
Unbiased? Some quality reporting to be sure, especially when politics aren't involved, but they jumped the bias shark a long time ago.
If law enforcement needs access to encrypted data, they already have a few different ways. They can subpoena the data and throw the person who controls the key in jail until they release it, or they can just brute-force the encryption in cases of extreme national interest (it's too expensive to do for run-of-the-mill crime, but they have the capability if they really need it).
IMO the entire goal of encryption tech should be to make the government incur significant costs for every invasion of privacy they feel they have to perform. That way, they have the power to invade our privacy (and I don't think we as a populace can really stop them from having that power) but it's so expensive / cumbersome to use they really only use it in extreme cases. I'm fine with privacy being broken by the government on a case-by-case basis; the danger is when the government does a dragnet on everyone.
The FBI just wants to throw you in jail. What do they publish? Lists of people they want to throw in jail. Anything that stands in their way of throwing you in jail is bad, including your encrypted phone.
https://en.wikipedia.org/wiki/Dual_EC_DRBG
RSA makes Dual_EC_DRBG the default CSPRNG in BSAFE. In 2013, Reuters reports this is a result of a secret $10 million deal with NSA.
According to the New York Times story, the NSA spends $250 million per year to insert backdoors in software and hardware as part of the Bullrun program.
Oh, and if terrorists are hell bent on attacking, they will do so with or without encryption. And no amount of data collection is going to stop them if they plan well enough.
... ah nevermind, I'm sure that doesn't mean anything.
Several popular encryption schemes have been developed by or heavily influenced by the NSA (including algorithms mandated by FIPS and other government organizations), and there has been a lot of speculation that they added backdoors to AES and other algorithms.
So in reality they've had the ability to add backdoors all along, and it's in their best interest to keep it a secret whether they've added one, so it makes complete sense that their chief would say this.
Valuations and market corrections aside, this comment is the thing that resonates with me the most. I got into tech and the Internet as a kid in the early 90s because of all the cool, intelligent weirdos thinking about and building the future. I've been traveling out to SF for about a decade now from NYC to do work, and it's been sad to watch that city go from a place I thought could be the only other place besides NYC I could live to a city I try to avoid. It feels like it's getting harder and harder to find those awesome weirdo hackers. The homogeny is brutal in SF.
The one upside is that it's still the Internet and I don't have to be there physically to enjoy the parts of it I like.
[1] as in, not believable
With term sheets the way they are, with liquidation preferences: if I as a VC invest $100 million in a startup for a 10% stake at a 1x liquidation pref, the valuation I'm placing on that startup is $100 million, and that is therefore what its reported valuation should be.
No need for any repricing of these startups - just report their true valuation as the largest amount invested with liquidation preferences.
Edit: Just an additional point to add - if I really valued it at $1 billion I wouldn't need the liquidation preference.
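To make the arithmetic concrete, here's a toy sketch (made-up numbers and variable names, not anything from the thread) of the gap between the headline post-money figure and what a 1x preference actually guarantees:

```typescript
// Toy numbers only: a $100M investment for a 10% stake with a 1x liquidation pref.
const investment = 100_000_000;  // dollars invested
const stake = 0.10;              // fraction of the company received

// The headline "valuation" reported in the press is simply investment / stake.
const reportedPostMoney = investment / stake;   // $1,000,000,000

// The 1x liquidation preference means the investor gets their $100M back
// before common shareholders see anything, regardless of the exit price.
const preferenceFloor = 1 * investment;         // $100,000,000

// In a $150M exit the investor takes the $100M preference, not 10% of $150M,
// so the $1B headline never had to reflect what the whole company is worth.
console.log({ reportedPostMoney, preferenceFloor });
```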
It has long since stopped being about innovation and disrupting markets and become mostly about shuffling piles of imaginary units around so a select few can get rich. Sadly, the pawns in this whole game will be all the employees holding options that are soon to be worthless... and who were convinced to take those options instead of a proper cash salary.
The only good thing this time around is that (unlike the last big Silicon Valley implosion) most of these companies are still private. So it will be really messy for some private investors and the greater San Francisco area is going to have a mess on its hands, but the broader US economy isn't going to get impacted as much. The stock market of 2015 also isn't propped up by bloated tech stocks in the way it was in 1999-2000.
If nothing else this article does a good job of demonstrating why it's important to always check your sources and their biases.
On almost a daily basis I get recruiters contacting me with interview offers from companies who have no chance of surviving past the end of the year. Their messages are often accompanied with bravado about their company's VC backers' other, more successful projects, which only makes me more skeptical.
Doesn't this ratio seem about right for any basket of unprofitable (or even zero-revenue) high-growth companies regardless of valuation? If those 14 winner companies average greater than a 10x return then everything pans out as expected--lots of risky investments together produce a reliable if more modest return on investment.
It seems like the only abnormal aspect is the size of the valuations, but that might be just what happens in a low interest rate environment--too much money chasing too few deals. Whether this affects this success rate of these investments remains to be seen I guess.
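A back-of-the-envelope sketch of that reasoning, with made-up numbers (nothing here comes from the article):

```typescript
// Assume a basket of 140 risky bets, each funded with 1 unit of capital.
const companies = 140;
const winners = 14;              // roughly 10% succeed, as the parent suggests
const winnerMultiple = 10;       // each winner returns 10x what was invested

const invested = companies;                  // 140 units in
const returned = winners * winnerMultiple;   // 140 units out

// Exactly break-even at a 10x average; anything above 10x makes the basket profitable.
console.log(returned / invested);            // 1.0
```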
You saw this with hedge fund managers and the stock market as well. Lots of hedge fund managers went on and on about how irresponsible Bernanke was because he kept interest rates low which raised asset prices.
To be clear, I don't think this is a nefarious or even conscious process. However, I think if someone really wants a particular scenario it tends to color their thinking.
Also the actual claim made isn't as sensational as the headline. Just says that 90% might take a lower valuation. All that requires is a general market decline.
Anyway, take a look at the list of unicorns.
If the businesses are based on bullshit, then he's probably right. If the valuations truly are wildly out of control (and it does seem like it), then sure, they're due for a correction.
It's also worth keeping in mind that if there are 144 of these startups, 90% of them is 129. That doesn't add up to that much money in SV terms. This seems like a tempest in a teapot. People love to make headlines, it seems, with "OMG bubble OMG!"
Wasn't that inevitable though?
There's a lot of money at stake, but the number of affected people is relatively low, isn't it? I understand why HN readers are interested in this, but is it a big story outside of tech circles?
Well, that's kinda how it's supposed to be. That's why expected value is more important. A few $200+ billion Facebooks and Googles can compensate for a lot of smaller $1 billion failures.
I thought this was always the assumption.
Is it because they are already valued at $1B+ that this thesis should change? I don't see why that should be the case...
Markets across the board are doing pretty badly this month - it would be interesting to see how badly tech is doing relative to oil futures and other commodities.
I didn't realize that there are this many unicorns. Or is this a typo?
L19 means "the number of legal positions for a 19x19 board".
cf. L18 which means "the number of legal positions for an 18x18 board".
L19 does NOT mean position L:19 on the Go board. :)
It's easy to recognize that there must be a lot of them, but hundreds of billions is absurdly fast growing. As another data point, the 2x1 board has 8 games.
log_2(L19) = 565 bits
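As a rough sanity check on those numbers (this is only the crude 3^361 upper bound, not the paper's exact computation):

```typescript
// Each of the 361 points is empty, black, or white, so 3^361 bounds the count
// from above (it includes plenty of illegal positions).
const points = 19n * 19n;                             // 361 intersections
const upperBound = 3n ** points;
const upperBoundBits = upperBound.toString(2).length; // 573 bits

// The exact count L19 is roughly 2.08e170, whose log2 is just under 566 --
// consistent with the 565 bits quoted above.
console.log({ upperBoundBits, approxLog2OfL19: Math.log2(2.08e170) });
```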
Edit: did the work, it does, but too lazy to describe the group https://gist.github.com/cinquemb/18e494348045725e2b60
We didn't submit it to HN before because we were waiting to complete the whole thing and create a proper index for all the articles. Part 6 will be published tomorrow, and you can expect a few more articles in the coming weeks.
I'd like to clarify that this is not a product, it doesn't cover all the awesome features that Trello has. It's just a learning experiment that we're sharing with the rest of the world.
Finally, to give you some background, we're a small Rails dev shop (5 guys) that works remotely. We're now playing with Elixir and Phoenix and having a lot of fun with it. I'd totally recommend any dev play with Elixir, especially if you come from a Rails background.
Kudos to our colleague @bigardone for putting all of this together.
Live demo: https://phoenix-trello.herokuapp.com/
part 2: https://blog.diacode.com/trello-clone-with-phoenix-and-react...
part 3: https://blog.diacode.com/trello-clone-with-phoenix-and-react...
part 4: https://blog.diacode.com/trello-clone-with-phoenix-and-react...
part 5: https://blog.diacode.com/trello-clone-with-phoenix-and-react...
part 6: coming soon
A tip to everyone checking the codebase (JS parts) to learn about building a Trello-like application: there aren't any optimistic UI updates in this tutorial app (e.g. after dragging a card to another list, displaying the card at the dragged position before receiving confirmation from the server).
When you add optimistic updates to the mix with real time updates, things get much more complicated. Tracking pending updates, rolling back when something goes wrong, ordering of updates, reconciliation with server when the client is missing some updates etc.
I'm building something similar, and these have been most time consuming parts to build in a reliable way.
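For what it's worth, a minimal sketch of the bookkeeping described above (illustrative names only, not from the tutorial or Trello): apply the change locally, remember it as pending, and roll it back if the server rejects it.

```typescript
// Assumed app-specific pieces; declared here only so the sketch type-checks.
type Move = { cardId: string; fromList: string; toList: string };
declare function applyLocally(move: Move): void;
declare const api: { moveCard(move: Move): Promise<void> };

const pending = new Map<string, Move>();

async function moveCardOptimistically(move: Move): Promise<void> {
  const txId = crypto.randomUUID();
  pending.set(txId, move);
  applyLocally(move);                 // update the UI immediately
  try {
    await api.moveCard(move);         // ask the server to confirm
  } catch {
    // Server rejected (or the connection dropped): undo the local change.
    applyLocally({ ...move, fromList: move.toList, toList: move.fromList });
  } finally {
    pending.delete(txId);
  }
}
```

Reconciliation after missed server pushes (the case mentioned above) still needs something extra, such as a version number per board or a full refetch on reconnect.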
It seems obvious that every time you move a card to a column, you recalculate the order of every card; however, that has overhead because you have to send the new order of every card to the server.
Is there a more efficient way to solve this problem? It seems to me it would be more efficient to assign a position value for only the card you are moving relative to those before and after it.
I'm sure trello has solved this
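One common approach to the question above (a sketch of the general idea, not Trello's actual code): give each card a fractional position, so moving a card only updates that one card.

```typescript
type Card = { id: string; position: number };

// Compute a position between the neighbours at the drop point
// (either neighbour may be missing at the ends of the list).
function positionBetween(prev?: Card, next?: Card): number {
  if (!prev && !next) return 65536;           // first card in an empty list
  if (!prev) return next!.position / 2;       // dropped at the top
  if (!next) return prev.position + 65536;    // dropped at the bottom
  return (prev.position + next.position) / 2; // dropped between two cards
}

// Only the moved card has to be persisted, e.g.
//   PATCH /cards/:id { position: positionBetween(prev, next) }
// If repeated halving ever exhausts floating-point precision between two
// neighbours, renumber the whole list once -- a rare, cheap maintenance step.
```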
Thanks!
Cool concept.
"Under the Harvard-Ravel agreement, Ravel is paying all of the costs of digitizing case law. HLS owns the resulting data, and Ravel has an obligation to offer free public access to all of the digitized case law on its site and to provide non-profit developers with free ongoing API access (Ravel may charge for-profit developers). Ravel will have a temporary exclusive commercial license for a maximum of eight years."
"For the duration of that commercial license, there will be a restriction on bulk download of the case law, with some notable exceptions. Harvard may provide bulk access to members of the Harvard community and to outside research scholars (so long as they accept contractual prohibitions on redistribution)."[2]
[1] https://www.ravellaw.com/plans
[2] http://lj.libraryjournal.com/2015/12/oa/harvard-launches-fre...
So you can't get a feed of cases from pretty much anywhere, and often, you aren't allowed to bulk download, etc.
Plenty of folks have digitized all the data Harvard is talking about here. They are not first. Carl Malamud, for example, has scanned all the federal reporters and tons and tons of other cases. http://radar.oreilly.com/2007/08/carl-malamud-takes-on-westl...
and
https://bulk.resource.org/courts.gov/
(My experience here is from back in the early 2000's working on getting pacer/states/etc to open up all of this data, so we could get it into google scholar and elsewhere. Often, they were willing to sell it to us, but they would not let us pay them pretty much any amount of money to make it just open and freely available, which is what we really wanted. Things have not gotten better, sadly, and in fact, have gotten worse)
In particular, I'd have expected Delaware to be in the first group, because so many public companies are incorporated there, and so the decisions of its courts on corporate and stockholder issues have major national importance.
Offhand, I can't think of why MA or TX would be worked on ahead of DE. Of course it is possible that the volume of material from each state is a factor...it could be that DE is being done in the first group but has a lot of material so won't finish in 2016. I've never taken a look at the volume of each state's output and so have no idea which state courts handle the most cases.
So one truly fascinating aspect of legal practice is that we tend to operate in the gray areas. However, the traditional way of researching case law -- reviewing a list of cases returned based on your query -- does little to help you sort through the mess.
With data visualization, you not only see the cases, but you see the relationship between cases, and how the cases work together. Among the most significant benefits, the data visualization elements of Ravel Law will help you narrow your research to the most relevant cases more quickly, while also helping you find those cases and arguments that, for whatever reason, didn't rank in the top of your search.
http://www.thecyberadvocate.com/2015/09/30/data-visualizatio...
The value in this appears to relate concepts from one case to others through the visuals on the graph. The larger the circle, the more important the case will be. Lines connect one circle to another circle and it's very easy to see which major cases are connected to other major cases. This is like a citator on steroids in my opinion as one can get to this point with a simple search. That means multiple steps in developing the analysis that finds the value and use of related cases. The snippets help immensely in determining which related cases are of value.
---
To: Erik Eckholm <eckholm@nytimes.com>
From: Aaron Greenspan
Date: October 30, 2015 at 1:31 PM
Subject: Concerns over Ravel/HLS Deal
Mr. Eckholm,
We just briefly spoke on the phone about your article (http://www.nytimes.com/2015/10/29/us/harvard-law-library-sac...). I am a Harvard College '04-'05 alum, one of Professor Zittrain's former students (I actually had to fight the administration to be permitted entry into his Law School course in 2001), and one of the first people Ravel tried to hire, because I am a programmer and I run a legal database called PlainSite (http://www.plainsite.org), which competes with them and receives about 16,000 unique hits daily worldwide. I was also a CodeX Fellow at Stanford Law School in 2012-2013, which is a program at Stanford that Daniel Lewis and Nik Reed are now also affiliated with. I tell you all of this only to point out that I am generally quite familiar with the principles, technologies and individuals involved here.
I've now corresponded with Jonathan Zittrain and Adam Ziegler at HLS, the latter by phone earlier today. I have brought to their attention a number of concerns, none of which have been resolved in my mind. They are as follows:
1. Harvard University is a Massachusetts not-for-profit organization. Its investment in Ravel, a for-profit corporation, via its XFund venture capital arm, and its subsequent contract with Ravel to earn "proceeds" (HLS's term) from that relationship, involves profit. The University could in theory lose its tax-exempt status over this deal. This is not the same as the Harvard Management Corporation investing in for-profit corporations to further the University's mission by earning capital gains and/or dividends -- this is an exchange of cash for assets that Harvard claims it owns (even though case materials are public domain) and a contractual promise to monetize those assets through a for-profit company on an ongoing basis.
2. Worse yet, the deal involves profit from the withholding of public access to legal data, which is the precise ill that this relationship is nominally supposed to and claims to cure. In reality, it only exacerbates it by legitimizing, with all of Harvard's imprimatur, the monopolistic legal information model that has dominated the nation's judiciary for the past century and a half.
3. Professor Zittrain wrote an entire book on the dangers of internet lock-in and monopolies, yet his actions here are helping to create exactly the kind of monopoly he has become well known for warning about. According to Adam Ziegler's recent post on the HLS Library blog (http://etseq.law.harvard.edu), there are to be "bulk access limitations" and "contractual prohibitions on redistribution." This is inconsistent with precedent concerning openness to court records and First Amendment law. That aside, what will these restrictions look like exactly? We don't know, because
4. ...Adam Ziegler told me that the contract with Ravel is not available for public examination and he did not know when it would be (if ever). He did read me a portion of the contract over the phone, which cited "non-commercial developers," and challenged me to come up with better wording. That's easy. I don't know what a "non-commercial developer" is, but I do know what a "non-profit organization" is. As an individual, I am a software developer who is the CEO of a for-profit corporation in a joint venture with a 501(c)(3) non-profit organization which together maintain PlainSite. Does that make me a "non-commercial developer?" Although Mr. Ziegler insisted that the contract was not subject to interpretation because it is simply clear enough already, I strongly disagree, as I expect any lawyer would. All contracts are subject to interpretation. The contract needs to be posted.
5. One of Ravel's investors is Cooley LLP, a law firm in the Bay Area. Based on what Daniel and Nik have told me in the past, Cooley has early access to Ravel's software. Essentially this means that Harvard Law School is giving one particular law firm an advantage, which I imagine must violate a number of its own policies, and seems wrong on the surface.
6. Professor Zittrain claims it would have taken 8 years to raise the money that Ravel is providing for this effort. This is extremely difficult to believe. Although Mr. Ziegler refused to disclose how much money is actually involved, we can safely assume it is in the $5 million range given that Ravel has only raised just under $10 million and has had employees to pay for several years. Recently, a single donor gave Harvard University's engineering school $400 million, as your own newspaper reported (http://www.nytimes.com/2015/06/04/education/john-paulson-giv...). Harvard is also in the middle of a $6 billion-and-counting capital campaign, as reported by The Crimson (http://www.thecrimson.com/article/2015/9/18/capital-campaign...). Are we really to believe that the number one law school in the country (by some measures, anyway) could not scrape together the cash to buy its own scanners, or that it does not have scanners already? Are high speed scanners even that expensive? Here's one on eBay for $1,450:
http://www.ebay.com/itm/KODAK-i610-PASS-THROUGH-HIGH-SPEED-D...
7. Mr. Ziegler could not answer my question as to why a consortium of non-profits was not consulted ahead of time. I know many that would have been eager to assist, likely including the Internet Archive in San Francisco, which already has several scanners.
8. Though I do not speak for them, I did notice that Harvard and Ravel seem to have nearly appropriated the name "Free Law Project," which is actually a project and non-profit organization at Berkeley that took over from work at Princeton. See http://www.freelawproject.org and http://www.courtlistener.com.
9. The Harvard Gazette has falsely reported, "The 'Free the Law' initiative will provide open, wide-ranging access to American case law for the first time in U.S. history." (See http://news.harvard.edu/gazette/story/2015/10/free-the-law-w...) I have been in regular contact with Jonathan Zittrain, Harry Lewis (an XFund Advisor who was Dean during my freshman year) and others at HLS about PlainSite since I brought the idea to them in 2011, almost immediately as soon as I started working on it. Additionally, CourtListener (from the group at Berkeley) has also been in operation for years, offering open, wide-ranging access to American case law. There's also Google Scholar, which is free and certainly more wide-ranging than Ravel.
10. Ravel is, to the best of my knowledge, unprofitable. It remains unclear why Harvard would place its bets on an unprofitable startup, rather than solicit donations for a project -- as it is so adept at doing -- in order to ensure maximum sustainability.
Mr. Ziegler attempted to dismiss the above concerns on the grounds that we still both agree in the greater goal of open access to law. I certainly have done all that I can to promote open access to legal information, including developing prototypes for digital legal data standards and suing the courts themselves (http://www.plainsite.org/dockets/29himg3wm/california-northe...). But if we both agree on this greater goal, then why has HLS been almost completely unresponsive to requests for cooperative assistance for the past four years, while this deal was being negotiated in secret?
To be clear, Harvard is not the only institution that has made highly questionable and insincere claims about its legal transparency efforts. Stanford CodeX claims to support open access to the law, yet it is now directly sponsored by Thomson Reuters, the parent company of West Publishing, and its "innovation contests" involve pledges not to redistribute case materials. But I would expect the Times to be able to distinguish between academic puffery and genuine efforts to improve the state of our incredibly broken legal system.
Aaron
PlainSite | http://www.plainsite.org
(https://news.ycombinator.com/item?id=7026960 discussion)
https://law.resource.org/pub/us/case/ (free mirror, looks like)
It's a great concept, and more/newer is better, but it seems odd for Harvard to act like they're the first to pull it off.
I went on to build Precursor (https://precursorapp.com), which uses Datascript (https://github.com/tonsky/datascript) to accomplish a lot of the things that Om Next promises. If you haven't tried Datascript, you should really take a look! It does require a bit of work to make datascript transactions trigger re-renders, but the benefits are huge. It's like the difference between using a database and manually managing files on disk for backend code.
My understanding is that Om Next will integrate nicely with Datascript, so you can keep using it once you upgrade.
If you're interested in learning more about building UIs with Datascript, I'm giving a talk on Monday at the datomic/datascript meetup: http://www.meetup.com/sf-datomic-datascript/. I'll be going over Dato (https://github.com/datodev/dato), a framework for building apps like Circle and Precursor.
Relay and Falcor are great, but when I look at their docs it's unclear how to integrate with whatever backend I want (especially Relay). Looking at Om Next, it was totally clear how to write my own backend. The tradeoff is that everything is a little more manual, but that control gives you a ton of flexibility.
In a small amount of code, I have a client that can query financial data in a bunch of different ways, and if the data isn't available it sends the query to the backend, which executes it against a SQLite database and returns it to the client. The components are all unaware of this: they are just running queries against data and everything just works.
Combine this is with first-class REPL and hot reloading support via Figwheel (both frontend and backend) and I'm blown away at how fast I'm going to develop this app.
ClojureScript is shaping up to be a fantastic way to program browser-based applications. This is what I use:
* Reagent -- another ClojureScript React wrapper
* Datascript -- An in-memory database with datalog query lang, this is used as the central store for all application data.
* Posh -- Datascript transaction watcher that updates Reagent components when their queries' return data changes
* core.async -- used for handling any kind of event dispatch and subscription. I do a unidirectional data flow type thing and it only took like 15 lines of ClojureScript.
This is one of the nicest front end development experiences I've had. Just the composition of these four libraries gives you a ton of flexibility and a good way to structure your application. You can use this setup to write a real-time syncing/fetching system with a backend database pretty easily.
We ran into the same problems with Om as the CircleCI guys, specifically: 1) our front-end data model wasn't complex enough to merit a heavy-weight data access system that required a huge amount of extra digging to get right. We spent far too much time arguing about how to structure app-data, and it only got worse as the app got more complex. The cursor system in its first iteration was just too cumbersome (for exactly the reasons this author states). We kept trying to restructure the data model in order to get it to do what we needed. To be fair though, this is well known, and David Nolen has done a lot to alleviate this in recent releases (ironically by making it more Reagent-like). 2) our app is end-to-end encrypted and requires pulling down potentially hundreds of blobs, decrypting them, and inserting them into the DOM. Under these conditions, Om would kick it and the UI would grind to a halt.
We switched to Reagent, and found that it was far faster and "got out of the way" of development. Add-watch is amazing too. Our app is quite large (front end SLOC is around ~50k lines), and Reagent has scaled beautifully and is a beast at large-scale insertions (on the order of 1000).
Om has some delightful features (undo ability is very powerful, routes coupled with Secretary is also great for Om), and David Nolen is a genius, but I think even the author has to acknowledge that the app-data/cursor construct is more of a pain than it's worth...
This is huge. I think it might even be the single largest problem to most projects' progress. I've seen a lot of projects that have tried to force non-tree data into tree-structures, and it never works out well. Projects grind to a halt after 6 months to a year because nobody can keep track of the dance steps they have to do with the tree-oriented code to manage their graph-oriented data.
Real, actual tree structures are just incredibly rare. Even some things that "obviously" seem like they should be modeled as a tree are far better off as a directed graph. Like databases of family trees - it's possible someone is literally married to their sister! Less cringe-worthy examples involve large families living near other large families, with generational overlaps causing the children of one group to marry the grandchildren of the other, and vice versa.
You don't really need React. If you can do the ostensibly hard work of figuring out the DOM edits yourself, your app will actually be faster than if you're using React, i.e. React has its own overhead. As long as the data relationship was right, I've never found it difficult to manage state thereafter. It's when the shoe doesn't fit that things become a problem.
The problem is, we have a systemic problem of treating front-end devs as not "real" developers, not capable of forging their own paths. It's not just from the outside in; I see a lot of front-end devs lacking a lot of confidence in their own skills. As a culture, we yell at any JavaScript programmer going his or her own way, building their own thing. "Don't reinvent the wheel!" they are told. Screw that. I can think of at least 3 times off the top of my head that the wheel itself was significantly and usefully re-invented in the 20th century alone. The problem is not "reinventing wheels". The problem is this institutional fear of making one's own decisions, leading people to think they need to learn everything.
react-cursor gives this pattern in javascript, immutability and all, but with regular old javascript objects. It also comes with all the same caveats as in this article. (I don't speak for the creator of Om, I speak for myself as the author of this library which was inspired by Om and Clojure)
https://github.com/dustingetz/react-cursor/
The beauty of the state-at-root pattern with cursors is that each little component, each subtree of the view, can stand alone as its own little stateful app, and they naturally nest/compose recursively into larger apps. This fiddle is meant to demonstrate this: https://jsfiddle.net/dustingetz/n9kfc17x/
> The tree is really a graph.
Solving this impedance mismatch is the main UI research problem of 2015/2016. Om Next, GraphQL, Falcor etc. It's still a research problem, IMO. The solution will also solve the object/relational impedance mismatch, i think, which is a highly related problem, maybe the same problem.
I'm currently knee deep in a react/redux implementation, which I guess is quite similar.
But I think it's approaching a consensus already within the CLJS community that, on API alone, reagent is the React interface you want.
It's extremely elegant and performant; probably the best frontend library I've ever used in close to a decade of web development.
Out of curiosity I tried swapping "<ul><li>Artichokes</li><li>Broccoli</li><li>Cabbage</li><li>Dill</li><li>Eggplant</li></ul>"
and the same without the broccoli and dill, back and forth a few thousand times using jquery.
The average time per change was 28 microseconds, or about 35,000 changes per second (Chrome, MacBook Air). Swapping a list of 300 fruits for a list of 500 fruits took 1.4 milliseconds per change.
I wonder if using some convoluted framework to "solve -- or at least mitigate" this might be premature optimisation? (As well as actually slower.)
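For anyone who wants to repeat the experiment, a rough equivalent of that micro-benchmark using plain DOM APIs (run it in a browser console; the exact figures will of course differ from the jQuery numbers above):

```typescript
const listA = "<ul><li>Artichokes</li><li>Broccoli</li><li>Cabbage</li><li>Dill</li><li>Eggplant</li></ul>";
const listB = "<ul><li>Artichokes</li><li>Cabbage</li><li>Eggplant</li></ul>";

const host = document.createElement("div");
document.body.appendChild(host);

const iterations = 10_000;
const start = performance.now();
for (let i = 0; i < iterations; i++) {
  // Swap the markup back and forth, like the .html() test described above.
  host.innerHTML = i % 2 === 0 ? listB : listA;
}
const elapsedMs = performance.now() - start;
console.log(`${((elapsedMs * 1000) / iterations).toFixed(1)} µs per change`);
```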
I don't think the last part is true. Browsers don't repaint (nor do they reflow) the page until it's really needed. So if you have a loop that modifies the DOM multiple times, but does not read from the DOM, the performance hit described by the author should not occur.
- Redux as your single state tree/graph
- Normalizr to normalize nested data into a flat graph, pulling nested resources out as separate records
- Reselect to build memoized queries over that normalized data
And the best thing is this is production ready, in JS, today.
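A compressed sketch of how those pieces fit together (the board/list/card shapes are invented for illustration; the normalized `entities` object is what you would keep in the Redux store):

```typescript
import { normalize, schema } from "normalizr";
import { createSelector } from "reselect";

// Normalizr: describe the nesting once, get back flat records keyed by id.
const card = new schema.Entity("cards");
const list = new schema.Entity("lists", { cards: [card] });
const board = new schema.Entity("boards", { lists: [list] });

const response = {
  id: 1,
  title: "Board",
  lists: [{ id: 10, name: "Todo", cards: [{ id: 100, text: "Ship it" }] }],
};

// In a real app this normalized shape lives in the Redux store.
const state = { entities: normalize(response, board).entities };

// Reselect: memoized selectors that re-join the flat records on demand.
const selectLists = (s: typeof state) => s.entities.lists ?? {};
const selectCards = (s: typeof state) => s.entities.cards ?? {};

const selectListsWithCards = createSelector([selectLists, selectCards], (lists, cards) =>
  Object.values(lists).map((l: any) => ({
    ...l,
    cards: l.cards.map((id: number) => (cards as any)[id]),
  }))
);

console.log(selectListsWithCards(state));
```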
How does that work if multiple users are collaborating on the same state simultaneously?
When a basically CRUD website tells me my perfectly fine browser isn't supported, I say: FAIL.
They are also on their way to be added natively to the Firefox root store: https://bugzilla.mozilla.org/show_bug.cgi?id=1172401
[1] https://aws.amazon.com/blogs/aws/new-aws-certificate-manager...
The mental calculation just doesn't work out for most things. My personal rule of thumb is:
benefit(me accessing X remotely) - cost(other people accessing X remotely) * risk(of that happening)
Benefits being low for most things, costs high, and risks...uhmm...nah. Only exception I can think of is very limited amounts of sensors (e.g. is X on?). What's the benefit of me turning on a gas stove remotely? Almost none. What's the cost of someone else turning on my gas stove? Really high. How much is the risk? Way too high.
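That rule of thumb as a toy function (numbers entirely made up, just to make the stove/sensor contrast explicit):

```typescript
// Connect a thing only if the expected upside beats the expected downside.
function worthConnecting(benefit: number, costIfAbused: number, riskOfAbuse: number): boolean {
  return benefit - costIfAbused * riskOfAbuse > 0;
}

console.log(worthConnecting(1, 1000, 0.05));  // remote-start gas stove: false
console.log(worthConnecting(5, 1, 0.05));     // "is X on?" sensor: true
```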
Then there's smart devices, another component of IoT. But how much smarts do we actually want? Screens are nice. Making my shower multi touch isn't (capacitive touch + water = no bueno. Imagine water from hell scenario and no way of turning it off with your wet hands). Fridge compiling shopping lists automatically? Neat. Cheap android tablet that comes with a fridge glued to it? Nah.
The only utility I see is locally connected devices. Using your phone as a remote. That seems handy. To a certain degree, we have that. Extra points if I don't need to download an app for everything, because don't you dare tell me that your Bluetooth on/off switch needs a 15MB .apk. If I gave one about the 14.9MB of branding you're including, I'd download your press kit.
There's some utility in home IoT widget-thingamabobs, but I'm almost certain we'll mess it up to no end in our excitement. There'll be some legitimately useful products coming from it, but most of it will be utterly cringe-worthy in retrospect.
/rant('IoT')
Stoplights? HVAC systems? Carwashes? Ice rinks? POWER PLANTS?
Yes!
https://www.youtube.com/watch?v=5cWck_xcH64
EDIT: I looked at his more recent talk from last November ... the situation has not improved
"115 batshit stupid things you can put on the internet in as fast as I can go by Dan Tentler"
https://www.youtube.com/watch?v=hMtu7vV_HmY
Featuring Spanish Chicken Controls
[0] http://www.jwcn.eurasipjournals.com/content/pdf/1687-1499-20...
[1] https://www.cs.berkeley.edu/~daw/papers/15.4-wise04.pdf
Since the web is now getting "engaged" to the devices with CoAP and other protocols I wanted to create awareness of how bugs can spill over into the real world and do real damage there. If hacked insulin pumps or baby monitors don't scare you enough how about hacking a train? https://media.ccc.de/v/32c3-7490-the_great_train_cyber_robbe... ?? (everyone should probably watch this simply because SCADA strangelove guys are crazy and awesome)
Anyway, to counteract the usually very "marketing intensive" tone of IoT groups on LinkedIn, I decided to start this IoT Security group: https://www.linkedin.com/groups/4807429 - it would be great to see people from all camps (IoT is a combination of 3 silos: 1) embedded, 2) web, 3) infosec) actively contributing technical topics in this group. I will keep it open to posts from marketeers, but I am heavily policing it for blogspam and will remove any posts that are not security related.
Also I have some ideas about hackerspaces (http://hackerspaces.org/) which IMO every city should have and support. They're needed to propagate knowledge between these individual camps properly. (my contact details are in my profile in case you are interested to discuss more offline).
"Peiter Mudge Zatko is a member of the high-profile L0pht hacker group who testified before Congress in 1998, and since he's gone on to head cybersecurity research at the Defense Advanced Research Projects Agency (DARPA) before joining Google in 2013. In June, Zatko announced he was leaving the search giant to form a cybersecurity NGO modelled on Underwriters Laboratories."
and above that, a section about a similar "consumer reports" style rating organization. that was also the first time i'd heard of the group i am the cavalry, which seems like a cool idea (in principle, at least, without really knowing much about the actual group).
and i understand this objection to that sort of approach:
"Its not the same quality problem... UL is about accidental failures in electronics. CyberUL would be about intentional attacks against software. These are unrelated issues. Stopping accidental failures is a solved problem in many fields. Stopping attacks is something nobody has solved in any field. In other words, the UL model of accidents is totally unrelated to the cyber problem of attacks."
it is a very different problem in a lot of ways, but that doesn't mean that an approach similar in spirit or presentation is doomed to failure. and i think it does fit into the broad category of messy consumer information problems that are hard to solve with specific detailed regulation.
2025: Every object in your home has an IP address & the password is Admin
It's amazing that Google search, which is quite useful, has negative market value as content. In the cable TV world, there are channels cable systems pay to carry, such as ESPN, and channels that pay to be carried, such as the Jewelry Channel. How did Google end up in the latter category?
http://www.emarketer.com/Article/Mobile-Account-More-than-Ha...
A lot of this is unnecessary; I could just be using CSS. I like that there's not all this asset-flow magic built out, just simple npm with a bash CLI. Unix philosophy and very little heavy lifting. I think there's still hope.
Now if we can just teach casual users git...
[0] https://gohugo.io/
[1] http://www.blevesearch.com/news/Site-Search/
[2] https://tlvince.com/static-commenting
Right now I've got my own self hosted platform, running Wordpress on a Digital Ocean droplet. The constant security updates for Wordpress are a nightmare, and it seems I have to hack both my theme and my post code every time I want to make a slightly interactive post. Never mind that there doesn't seem to be a decent way to preview posts on mobile.
As others have mentioned, it seems the best way to get more people in control of their own platforms would be with easier static tools.
On that note, I've been really impressed with org-mode and pandoc. I've been writing and generating code within a text based environment lately, but it still feels as though the process hasn't really budged or improved much at all in the past 15 years. With org-mode and pandoc, along with babel, I can write and test code, embed images, and generate decent html/pdf all in one go.
But for the casual user, I think it's become more difficult to self publish over the years, not less. The tools we've built have gotten pretty embarrassing if our goal is to get as diverse of a population as possible speaking and sharing their ideas openly on the web.
Cheers to everyone still working on tools like org-mode, pandoc, and latex. It's still relevant, and it still does a great job. If you haven't checked them out, take a look. I was certainly surprised by how far these projects have been taken.
Or maybe it's just me?
From the article:
"There was a promising short lived moment where smaller, topic-oriented blog networks like Svbtle (amongst others) started appearing, but even those seem to have gone by the wayside and are increasingly being replaced by Medium."
Back in 2002 I co-founded a blogging company. At that time we were competing with the likes of Blogger.com and Typepad.com. There were many other companies, at that time, which I've since forgotten. At one point, around 2003 or 2004, we created a list of all our competitors, and there were at least 100 names on the list.
My point is, the vast bulk of all blogging has always been on 3rd party hosted blogging sites. Self-hosted blogging has always been rare. I self-host my blog, smashcompany.com, on a server at Rackspace, but this has always been a rare option.
All the same, I am intrigued by the question. If anyone has historical data on this, it would be fascinating to know when self-hosted blogs hit their peak. If Technorati.com has survived in its original form, then it would be in possession of this historical data, but sadly, the original Technorati.com is dead.
Jekyll and Github Pages keeps the deployment simple and Google Domains has proven to be simple, cheap, and reliable. I tried Kloudsec out last week on a whim after seeing it on HN, and so far it's great - simple, free SSL with let's encrypt.
https://evancordell.com if interested. It needs a little more love before I'd really say I'm pleased with it, but I'm very happy with how cheap and easy it was to set up a personal blog with SSL.
I think many people feel like they get out what they needed to get out on Twitter/Facebook. They used to write on their own blogs to get things out, now it's elsewhere.
I think a lot of folks are still running WordPress of some kind on their own Dreamhost, etc, accounts which feels like self-hosting to me.
The key questions of the debate on the cons side of switching, assuming you're blogging for fun and not thinking particularly about advertising or massively customised SEO strategies, seem to be:
1. do I own my content
2. will my content be accessible forever
As this post highlights, the answer to (1) on medium is YES. So, no problems.
The answer to (2) is also, for all practical purposes, YES, but you shouldn't depend on it.
But is this really such an issue anyway? I'd assume the vast majority of people back up their photographs as a matter of course, and how difficult is it to back up the plaintext of your blog pieces too? If you have backups, and the answer to (1) is yes, then it really starts to look like an easy decision.
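To make that concrete, here's a rough sketch of what "back up the plaintext" can look like, assuming your blog exposes an RSS feed (the feed URL below is hypothetical, and most feeds only include recent posts, so a real backup would need to walk an archive too):

    # Minimal sketch: save each feed item as a plain-text file.
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://example-blog.com/feed"   # hypothetical feed URL

    with urllib.request.urlopen(FEED_URL) as resp:
        tree = ET.parse(resp)

    for i, item in enumerate(tree.iter("item")):
        title = (item.findtext("title") or "untitled").replace("/", "-")
        body = item.findtext("description") or ""
        with open(f"{i:03d}-{title[:40]}.txt", "w", encoding="utf-8") as f:
            f.write(title + "\n\n" + body)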
I recently switched to Medium and couldn't be happier. With Ghost I was spending more time tweaking and maintaining purchased themes than I spent writing.
It's really really really fucking hard to run a blog that works well on desktop/tablet/phone and doesn't crash if you get a traffic spike. How many self hosted blogs can handle 500,000 hits in less than a day? Not many.
Medium will probably die someday. That's fine. I own my content and my content URLs. I'll simply port it to a new platform. It wouldn't be the first time.
I've written about it here https://hugotunius.se/aws/cloudflare/web/2016/01/10/the-one-...
There's already an infinite quantity of interesting content to read, and it seems reasonable to expect rising quantities of worthwhile material, since I keep finding writing and creations I was unaware of when they were being made. With all this stuff, I want to be able to control where and when I read, and how I filter, manage, follow, and store it. At some point, platform operations reflect a business plan, and that plan may or may not allow for one or more of my preferences, for reasons of $. I guess I just prefer a relationship where a standard or pseudo-standard gives the user control, or at the very least the ability to choose between vendors.
Then again, as I'm barely capable of managing a basic server install, I'm fully aware of why people throw in with hosted systems. I'm hoping for great things from stuff like Sandstorm.
What I've discovered is that having a VPS opens up a world of opportunities for network-related things. I've used that server to host Ludum Dare entries, ClickOnce .NET apps, and a wiki profile image that I used to see if anyone was looking at my page on a company wiki. An SSH tunnel has allowed me to bypass some firewalls that block the majority of ports. I've learned a lot on that server. Some of the best $50/year I spend on hosting.
Turns out AOL had the right idea the whole time -- people want platform-specific keywords and they want to trust the platform's caretakers to decide what's OK for them to see.
You can point to lots of web sites that are hard to read, but that just proves the point that people are rather finicky about it these days.
Right now we are working on an IDE inside SunSed so anyone can create their own template with HTML++ (our own templating/programming language).
Here is a screenshot of our IDE (I'm working on it right now):
https://cdn.sunsed.com/0/images/shared-on-web/htmlpp-ide-pre...
We are going to announce HTML++ and SunSed 2.0 on HN in the next few months.
Happy hacking & blogging!
I've done some exploration of just where intelligent conversation online lies, and frankly was surprised at the results: https://www.reddit.com/r/dredmorbius/comments/3hp41w/trackin...
The methodology uses the Foreign Policy Top 100 Global Thinkers list as a proxy for "intelligent discussion", the string "this" to detect English-language content generally, and the arbitrarily selected string "Kim Kardashian" as a stand-in for anti-intellectual content. Google search result counts on site-restricted queries are used to estimate the amount of matching content per site, with some bash and awk glue to string it all together and parse results.
As expected, Facebook is huge, as is Twitter. But when looking at the FP/1000 ratio (hits per 1,000 pages), the KK/1000 ratio, and the FP:KK ratio, more interesting patterns emerge.
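For clarity, the arithmetic behind those ratios is just this (the numbers below are placeholders, not measured values; the real counts come from the site-restricted Google queries described above):

    # FP = pages matching the Foreign Policy thinker names,
    # KK = pages matching "Kim Kardashian",
    # total_pages = pages matching "this" (proxy for English-language pages on the site).
    def ratios(fp_hits, kk_hits, total_pages):
        fp_per_1000 = 1000 * fp_hits / total_pages
        kk_per_1000 = 1000 * kk_hits / total_pages
        fp_to_kk = fp_hits / kk_hits if kk_hits else float("inf")
        return fp_per_1000, kk_per_1000, fp_to_kk

    # Entirely hypothetical example site:
    print(ratios(fp_hits=40_000, kk_hits=120_000, total_pages=2_000_000))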
Facebook beats G+, largely.
Reddit makes up in quality what it lacks in size, but Metafilter blows it out of the water. Perhaps a sensible user filter helps a lot.
The real shocker though was how much content was on blogging engines, even with a very partial search -- mostly Wordpress and a few other major blogging engine sites. Quite simply, blogs favour long-form content, some of it exceptionally good.
But blogs suck for exposure and engagement.
This screams "Opportunity!!" to me. I've approached several players (G+/Google, Ello) with suggestions they look into this. Ello's @budnitz seems to be thinking along these lines (I'm a fan of what Ello's doing, but its size is minuscule, and mobile platform usability is abysmal.)
One of the most crucial success elements for G+ is the default "subscribe to all subsequent activity on this post" aspect. Well, that and the ability to block fuckwits (though quite honestly ignore would be more than sufficient). There's a hell of a lot else to dislike, but those two elements are crucial to engagement.
As for blogging, I'm a fan of a minimal design (http://codepen.io/dredmorbius/pen/KpMqqB) and static site generators.
Half the time I see Medium posts; the other half I'll see something hosted with Jekyll + GitHub Pages. Which technically isn't self-hosted, but it's still quite different from just writing on Medium or something of the sort.
However, I suspect Hacker News readers are not the average, and I do think there's a downward trend in self-hosted blogs versus using Medium/WordPress/Tumblr or even Blogspot.
For example, in the video I just watched he said "the natural way to compute the distance between two vectors is using cross entropy." And then he goes on to describe some unnatural features of cross entropy. The truly "natural" way to compute distances between vectors is the Euclidean distance, or at least any measure that has the properties of a metric.
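To illustrate the difference (a throwaway NumPy sketch; the probability vectors are made up, since cross entropy only makes sense for distributions):

    import numpy as np

    p = np.array([0.7, 0.2, 0.1])
    q = np.array([0.1, 0.3, 0.6])

    def euclidean(a, b):
        return np.linalg.norm(a - b)            # a true metric: symmetric, zero iff a == b

    def cross_entropy(a, b):
        return -np.sum(a * np.log(b))           # asymmetric, and not zero even when a == b

    print(euclidean(p, q), euclidean(q, p))             # identical
    print(cross_entropy(p, q), cross_entropy(q, p))     # generally different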
I can understand this is a crash course and there isn't time to cover nuances, but I'd much rather the instructor say things like "one common/popular way to do X is..." rather than making blanket and misleading statements. Or else how can I trust his claims about deep learning?
https://www.cs.ox.ac.uk/people/nando.defreitas/machinelearni...
E.g. in the first coding question I had to reverse-engineer what they meant (including passing values in a format I consider non-standard: transposed!). The first open-ended questions were entirely a matter of "ahh, you meant this aspect of the question".
Otherwise, the course (the general level, pace, overview) seems nice.
EDIT:
The IPython Notebook tasks (i.e. the core exercises) are nice.
It comes with videos, notes, all the math, cool IPython notebooks, and will let you implement a deep-ish network from scratch. That includes doing backprop through the SVM, softmax, max-pool, conv, and ReLU layers.
After that you should be more than capable of building a 'real' net using your favourite lib (TensorFlow, Theano, etc.).
[1]: http://cs231n.stanford.edu/
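If "from scratch" sounds intimidating, it's mostly small pieces like this ReLU layer (a sketch in the spirit of the cs231n assignments, not their actual code):

    import numpy as np

    def relu_forward(x):
        out = np.maximum(0, x)
        cache = x                      # keep the input around for the backward pass
        return out, cache

    def relu_backward(dout, cache):
        x = cache
        dx = dout * (x > 0)            # chain rule: gradient flows only where x was positive
        return dx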
In any case, I regret waiting so long to learn deep learning. (I thought I needed many years of CUDA/C++ knowledge, of which I have none; but in fact, what I need is to know the chain rule, convolutions, etc., things I learnt a long time ago.)
And if not natively (i.e. using Docker/VMs), would they be able to use the NVIDIA CUDA card on my system? And how much disk space would be needed?
Thanks.