Companies, especially small ones, are defined by their culture, and I really think culture is best developed and maintained in person. We recently had three of our team members move away for various reasons, and they're now working remotely. It has been a shake-up. I won't say it's a bad thing, because I truly want them to be happy, and I'm truly willing to help them make it work, but it has been a surprising culture shift for our entire company. At this point, I think we'll make it work, but the day-to-day work experience for all our employees has changed dramatically, and that's not something to take lightly.
Find a place you truly want to live (which definitely doesn't have to be in the Bay Area) and find a company that you want to work for locally. Go into the office every day. Talk with people about more than work. Connect and develop relationships. Work toward a true culture that exemplifies what the company stands for both internally and externally, and make it meaningful to everyone involved.
That's what makes me happy, and that's what I'm optimizing for. Am I in the absolute number one place that I want to be in, period? Maybe not. If I had my say I'd be living and working on the east side of the Sierra Nevada, within an hour each of Mammoth Mountain and the Yosemite highlands, and that may be my eventual destination.
But right now, location is far less important to me than the people I spend each day with, the people with whom I work, and the company culture that I'm helping to generate and preserve. That's what moves me forward each day, and I truly believe that will make my company more successful and sustainable.
I understand you, though. I went through a time in my life when I was more attached to places than people. Turns out I was in the right place all along; I just hadn't run into the right people. That changed for me, and now I truly believe that location is a small price to pay. It's complicated: it is of course better to have a great employee working remotely than a poor one in the office, but I think it's even better, perhaps exponentially so and especially for a startup, to have that great employee in the same room.
*Edit: I'd like to add that part of this is the "who moved my cheese" problem of going from a 100% local company to a significantly dispersed one. We are adapting as a whole, and each week we improve our process and culture. The challenge has become "how do we maintain a culture and coherence remotely?" I think in time we will be successful at that and continue to be a strong group, but it's still a challenge, and one that you'll have to weigh against other challenges if you so choose.
From another comment: "As of 2006-2010, the median price of a house in Louisville is $48,300. The median sales price for homes in San Francisco, CA for Sep '12 to Nov '12 was $750,000."
That is the free market at work. The population has deemed it would rather pay 15 times more to live in San Francisco than in KY. There is a reason homes are that price in Louisville: there are more sellers than buyers and a plethora of supply, so the price is driven down.
On the most inaccurate and basic of breakdowns, 15 out of 16 people would rather not live in KY.
On the other hand, it's very easy to quantify the benefit of remote workers. You increase your potential labor force if you remove geographic restrictions, which cuts costs and improves productivity. I personally was able to quantify the benefit of working remotely in terms of distractions. I work remotely on a medium sized team and I occasionally travel to the headquarters to work on-site. My productivity always drops when I'm on-site because of the constant interruptions and meetings, both initiated by others and myself.
It's not that your city isn't fun and exciting. It's that your office is in a building in that city, whereas my office is anyplace I feel like being at the moment.
Now I might feel like being in your city for a while. Possibly even in your cool office. But for half the year I'll probably be someplace completely different. Because I can.
The author hit the nail on the head when he explained why this gig is so great: we can do it from anywhere.
The good companies have figured this out and are encouraging their people to do just that. Since that's now a viable option, it's tough to understand why people are still working for companies that don't give that option.
Essentially the efficient frontier is a finance concept that says that combinations of assets can be graphed and form a line called "The Efficient Frontier" where only portfolios of assets on that line should be considered.
Sorry for the link to wiki, but this is a really short article. http://en.wikipedia.org/wiki/Efficient_frontier
When I consider where I live, I want to optimize to make sure I am on that frontier. Instead of risk and return on the axes, I think of it as a multivariate optimization, but essentially what I am saying is that many cities do not make it onto the efficient frontier when you look at them logically.
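For the sake of illustration, the dominance check behind an efficient frontier can be sketched in a few lines (the cities and scores below are entirely made up, purely to show the mechanics):

```python
# Illustrative Pareto-frontier check over cities scored on hypothetical criteria.
# A city is on the efficient frontier if no other city beats it on every axis.

def dominates(a, b):
    """True if a is at least as good as b everywhere and strictly better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def efficient_frontier(cities):
    return {name for name, score in cities.items()
            if not any(dominates(other, score)
                       for o, other in cities.items() if o != name)}

# Made-up (nightlife, museums, affordability) scores, 0-10:
cities = {
    "NYC":        (9, 10, 2),
    "SF":         (8, 7, 1),
    "Louisville": (4, 3, 9),
    "Boston":     (7, 8, 3),
}
print(efficient_frontier(cities))
```

With these invented scores, SF drops off the frontier (NYC beats it on every axis here), while Louisville stays on it purely by winning on affordability, which is the point: a city only needs to win on one axis someone cares about.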
For example, is there any way that Louisville has as rich a history as NYC, DC, Boston, or even SF?
Does Louisville have better night life than any other big city (DC, NYC, SF, Boston?)
Does Louisville have better skyline than any other big city (DC, NYC, SF, Boston?)
Does Louisville have better live performances than any other big city (DC, NYC, SF, Boston?)
Does Louisville have better museums than any other big city (DC, NYC, SF, Boston?)
Does Louisville have a better hipster scene than any other big city (DC, NYC, SF, Boston)? It's probably better than Boston's, but I don't care whether hipsters are part of the culture or not.
And the OP's biggest point, that he likes being able to drive to rural areas in 15 minutes: it's more like 20 minutes from SF, but some of the best mountain biking, trails, etc. are right there. Boston has the same thing 20 minutes away. IMO, DC and NYC make it harder to get to rural areas.
Liking Louisville is completely understandable if you just like what's familiar and don't want to move and have to make new friends, but it should be 100% understandable why a recruiter, viewing the opportunity as an outsider, cannot imagine someone wanting to stay.
[ADDED] I reread what I wrote and it seems like I'm bashing Louisville; my intention was more to put out the efficient frontier concept for selecting a location.
[To unalone and the OP] Sorry for coming off as pompous. It does read a little that way, but I used the OP's criteria, not my own. The OP could have made a much better argument by specifying what he likes about those criteria, but he didn't, so I just asked the questions rather than making assertions about them. Notice that I didn't specify whether DC does have better nightlife than Louisville? I instead just asked the question, which the reader can answer on their own.
There isn't a right and wrong answer here, IMO. In my experience, working in the same location and working in different locations are very different working experiences. For some companies and employees, one will work. For others, it will not. I will be very hesitant to ever enter into a remote working situation again- I did not like it at all. But that's just me.
All that is required is for you to make sure you work for a company that matches you- don't get angry if a company/prospective employee doesn't match what you want. That's where the article's complaints about recruiters ring true- they don't know/care. But they don't know/care about anything other than the buzzwords on your resume, so this shouldn't be anything new.
(Ironically, just this morning I got a LinkedIn spam message from "CultureFit Staffing". Anything but, folks...)
Not wanting to move for a job is the default for 99% of the world.
"Don't take (or keep) a job because you like the people. If you're a decent person, you'll find people you like (and who like you) at any job you take."
This is patently NOT true. The people you work with, in my experience, matter far more than any other factor.
What you're paying when you suffer Manhattan or Bay Area rent is the career benefit (?) that it confers to live in such a place. You may be overpaying; you probably are. I don't think anyone has good data on this, which is why the extortionist mega-landlords who set prices (by limiting supply through NIMBY regulations) can get away with so much. No one has a good handle on what it's actually worth to live and work in a star city. I think a lot of people pile into star cities because they're driven by FUD and FOMO (Fear Of Missing Out).
I don't know what "the right answer" is but I can see the appeal of living in these high-rent areas. It really sucks, though, because we're in an uncanny valley where people are just mobile enough to stratify by ambition (with a lot of noise in the mix; I am not saying that people who don't live in expensive places aren't ambitious, but the correlation exists) in their 20s, but not enough to render location obsolete.
I've been living in a small coastal town now for 4 years, not close to much of anything related to my field, yet I am working and happier than I could ever be in some metropolis.
Employers will avoid remote workers at their own loss.
I've turned down a couple of good opportunities because I didn't want to relocate. Of all of the reasons to relocate to a new city, I think doing so for work is possibly the worst. It's too easy to fall into a trap where work becomes your life.
The compromise that I've made is spending a few days in Mountain View every 6 weeks or so. It's not terribly inconvenient for me, allows me to pad my frequent flyer miles, and I generally enjoy California. I think the cost of living in California and Maryland is a lot closer than Louisville's would be, so I've always got my eye open for somewhere even cheaper than here. My home town is Memphis, TN, which is damn near free to live in comparatively, but I really like Annapolis, its proximity to DC and Baltimore, and the knowledge that almost everything is within a couple of hours.
The biggest trouble I have is that I really like the bigger cities. I love the time I spend in and around San Francisco, and on occasion I'll spend time in NY, which I also enjoy. I can't ever tell though if it's just because I'm effectively a tourist, or how much I would enjoy it as a permanent residence. Ultimately, I think I'm plenty happy anywhere with a temperate climate and the ability to work from home, so I'm occasionally torn on job offers I receive to work in sexier locales. Grats to Ernie for having found his ideal place. The spot I'd move to to maximize dollar value (Memphis) is too hot to be perfectly happy, and all the places I've found with better climates tend to be more expensive -- so perhaps I'm still searching for my idyllic setting, or perhaps it's just a matter of the grass being greener.
I would say startups are probably most likely to be able to take advantage of this.
BTW, this post makes me want to move to Louisville and join Ernie :) (not really, Chicago is really nice).
Optimizing for happiness, put in the context of actual real-world happiness, is a strong point. I'll keep praying about it...
If you had gone in with the mindset "I want to work locally, and relocating isn't something that works for me," I could respect that. But saying "I won't relocate for YOU" comes off as saying "hey, I make the decisions, not you," or as an attention-grabbing title for a post.
For a full-time employee, remote work is like a long-distance relationship: more often than not, it just doesn't work. Heck, remote contract work is already difficult as it is.
Concur with the author 100%. There are lots of nice cities in the US that are way cheaper and more livable than the big two, and moving from one of them to effectively make less, commute more, and have less personal time, even after the cost-of-living adjustment, is pretty questionable.
If I were to move to SF, I don't see how I could afford a place that was both close to work and had a garage where I could tinker, unless I felt like commuting 2 hours each way. But I'm also spoiled by a real estate market where you can get a decent house for under 200k, sometimes close to 100k, where I'm living now.
Also, I didn't see anything in the post about the great local coffeeshop scene in Louisville :)
This is false. You only need to make twice as much as you are spending. I don't think anyone making something like $100k in Louisville is spending all of it, so they wouldn't need to make $220k in New York.
In that sense, money is time... and time can be represented as money.
That being said, it was really smooth on my iTouch 5G.
The demo HTML5 app also runs in Safari. The problem is that Safari's Nitro JS engine is much faster than the older JS engine used in a UIWebView (I think the restriction is supposedly due to security).
For the moment, native is still the way to go, IMO. But I've always believed that at a certain point iPhones/Androids and their browsers will be powerful and efficient enough to negate any performance drawbacks to HTML 5. I think that point might arrive by the iPhone 6 or so, when developers might feel comfortable dropping support for the iPhone 3GS and equivalent. In fact, many devs are already dropping support for the iPhone 3GS.
I do hope this app silences the HTML5 critics out there who say you can't get a good UX with HTML5, though. The proof is in the pudding.
Facebook and Google tried and failed miserably; LinkedIn and Twitter are probably the closest to actually doing it, but users still prefer the native versions. Why?
One big reason, I think, is clearly visible in the video: the scrolling physics are different. It's annoying and feels wrong, and I think people notice even if they don't know how to describe it.
Maybe with this Sencha contest we'll see some good ones.
Also, if you are doing a bunch of optimizations to make it fast in HTML5, then you're basically doing the same work as writing a native app, so what are you gaining by using HTML5?
Seriously, at some point you can get a better user experience, have better access to native API's and can really leverage the full abilities of the device by going native. So, you know, go native.
Disclaimer: I have written HTML5 mobile apps, and I think it's a great developer experience, but it feels worse as a user.
A more apples-to-apples comparison would be to put it in an iOS app running a UIWebView and see if the performance holds up. I'd certainly be interested in the results.
Do the new native FB apps only run on iOS5+ and Android 4.1+? HTML5 may be ready on those platforms, but Android still has a lot of 2.x phones floating around. How do the HTML 5 benchmarks for iOS 5 run on the original iPhone 4?
Truth is - HTML5 apps can be made to perform well but there are limits. Battery life suffers and you may find in future that the cool animation/whatever you wanted to add in is too laggy (especially on older devices).
That made a lot of things you take for granted today possible. And it made total sense, here was someone spending their time writing code to do something that you wouldn't have to spend your time doing. You could write it yourself or you could leverage their effort by buying a copy of their stuff. That infused money into the field which gave people the freedom to invest their time into building amazing things.
I like the idea of a young person beginning to code, polishing their application, uploading it, and getting some money in donations. I'm more likely to donate to a great attempt at a project by a young person than I am to buy software on Linux. The Raspberry Pi Store could have been the community's push toward loads of great community-driven open source projects.
As far as I know they don't want to sell directly through their online store.
-- EDIT --
Looks like I was confused with their other store. This indeed just looks like an App Store :)
"NIST key management guidelines further suggest that 15360-bit RSA keys are equivalent in strength to 256-bit symmetric keys."
Plus, every time anyone brings up the subject, the answer is always the same: "If you roll your own crypto, you're an idiot." Which is not very encouraging. It's likely why crypto libraries are so antiquated and terrible.
(1) We need to build computers out of something other than matter
We can disregard (1), as it would probably require changing (almost) all of the encryption algorithms we currently use. (2) will most likely break all key lengths, so any use of AES will be weakened.
Usually when a cipher is broken, it isn't broken fully; rather, the keyspace is reduced. By making your starting keyspace even larger, you make even a broken cipher secure (up to a point).
How does that work? It seems odd that it is independent of the size of the system.
This is due to the probability of key collision. See http://en.wikipedia.org/wiki/Birthday_attack
Not that I'm saying 2^128 calculations isn't a lot, but compared to 2^256, it's a tiny speck.
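For a rough feel of the arithmetic, here's a sketch of the usual "halved effective strength" argument (a generic square-root attack, birthday- or Grover-style, turns 2^n work into roughly 2^(n/2), i.e. halves the effective bits):

```python
# Rough security-margin arithmetic: a generic square-root attack roughly
# halves the effective bit strength of a symmetric key, which is the usual
# argument for preferring 256-bit keys over 128-bit ones.

def effective_bits(key_bits, square_root_attack=True):
    return key_bits // 2 if square_root_attack else key_bits

for bits in (128, 256):
    print(f"{bits}-bit key: ~2^{effective_bits(bits)} work under a square-root attack")

# And 2^128 really is a "tiny speck" next to 2^256: the ratio is itself 2^128.
print(2**256 // 2**128 == 2**128)  # True
```

This is only the back-of-envelope version of the argument, but it shows why "reduce the keyspace" attacks make the larger starting keyspace worthwhile.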
You can be sure tax dollars are powering mind-staggering supercomputers that the public doesn't know about right now.
http://www.cse.chalmers.se/research/group/logic/TypesSS05/Ex... (also covers typed lambda calculus, AFAIR)
One of the dominant modeling paradigms in formal semantics (e.g. understanding the meaning of human language) is built around a typed lambda calculus called Montague Semantics (see http://plato.stanford.edu/entries/montague-semantics/ )
What would an existing proxy server (i.e. doesn't understand HMURR) do with a connection that looked like this?
It seems like the main reason to prefer this proposal is if you do not want to write another parser for HTTP/2.0 and think that new features afforded by a different wire level representation are not worthwhile.
Another thing to note is that this proposal effectively adds on multiplexing without prioritization. Prioritization is fairly important, otherwise the client application has to do application layer throttling in order to reduce contention. Adding prioritization would help obviate the need to make the link utilization vs contention tradeoff.
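To illustrate why prioritization matters in a multiplexed connection, here's a toy sketch (stream names and priority numbers are invented) of draining frames by priority instead of first-come-first-served, so a critical resource isn't stuck behind bulk transfers:

```python
import heapq

# Toy prioritized multiplexing: frames from several streams share one
# connection; lower priority numbers are drained first, with the sequence
# number as a tiebreaker to keep same-priority frames in arrival order.

def send_order(frames):
    """frames: list of (priority, seq, stream) tuples; returns stream send order."""
    heap = list(frames)
    heapq.heapify(heap)
    return [stream for _, _, stream in
            (heapq.heappop(heap) for _ in range(len(heap)))]

frames = [(2, 0, "image-1"), (0, 1, "html"), (1, 2, "css"), (2, 3, "image-2")]
print(send_order(frames))  # ['html', 'css', 'image-1', 'image-2']
```

Without something like this at the protocol level, the client has to throttle at the application layer, which is exactly the link-utilization-vs-contention tradeoff mentioned above.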
I understand that no one wants to define a new format, but I don't see how it is more difficult than defining a new protocol.
Perhaps the protocols they're developing will have use for ground level civilian communications, but it's unlikely that satellite based internet will ever see widespread adoption in the west because of latency.
Probably not, there's far less spectrum reserved for civilian uses than for military per capita.
first of all, there's probably more than darpa working on it. second of all, darpa is not working on it (yet); they want people to come in and help design it for them. hence the link.
but as always, please correct me if i'm wrong
I've always thought that bookmarks are a great idea; it's the interface in the browser that sucks. I think people use tabs as a substitute for bookmarks because the interface is lacking. Sometimes it's nice to have a visual prompt.
On the other hand, what's the point of wasting browser chrome on a bookmark that you only click once in a blue moon? You'd be better off using that space to show your most-visited sites, or your history, or a filtered history.
The one time the browser bookmark bar was invaluable for me was when my computer's keyboard broke. I could actually get quite far without it, thanks to the bookmarks.
There's another issue here of info-hoarding. I met someone recently who saved and tagged a reference to pretty much everything he read. He seemed to think it was useful for him. Do we need to keep emails and the like? Do we keep this stuff because keeping it costs so little, or because it costs more effort to get rid of it?
Free searching relies on your memory. Sometimes we need prompting or signposting. Search directories versus search engines.
If I read something on the web, like a tutorial, and I think it has value, what chance do I have of remembering every domain/URL, or even the magic keywords that got me to the article in the first place? Those may not lead back to it in the future. That's when I need a bookmark. I like to lightly tag them, which is possible in Firefox; Chrome feels lacking there.
In Firefox you can use the awesome bar: start your query with a * and you will search only your bookmarks. I use that trick quite a bit.
I really ought to get around to aggregating my own bookmarks into a useful format.
I just wish the browser would do something smarter with my history, and marks. And leave window management to my window manager. I've tried lots of the tab extensions for Firefox - and I've found most of them lacking.
In fact, Chrome was based on this idea and it comes with bookmarks bar hidden by default.
I have hit this in every subject I've studied where I wasn't really motivated to learn the material. The reason this happens is that you reach a point where suddenly all the basic concepts start supporting the follow-ons. If you never really learned the basics, you will be lost. For example this happened to me with foreign languages. I had no real interest in learning French, but a foreign language was required in school. The first year was easy: basic vocabulary, stock phrases, and simple rules of grammar. The second year was not too bad either. But I never really learned anything... I would cram before exams and get decent scores, then promptly forget the material. Third year, when you actually had to put it all together... I was sunk.
If you're studying something and can't motivate yourself to really learn the basics, you can only go so far. If you're studying computer science because you heard the jobs pay well, but you don't really care about learning the first- and second-year material, you will hit that "leap of difficulty" at some point.
I think anyone can learn to code, or learn maths, but there's no doubt that some folks are MUCH better predisposed to it. That said, some good tips in the article.
I had to learn C in university; I had applied for a BA first and thought coding was for monkeys. I hated it, and I didn't understand anything, but I struggled through. At some point it clicked, and I've loved it ever since.
The funny thing is, afterwards I was biased into believing coding is easy and that it had been for me from the beginning. It took me a while to realize how hard it was for me to start.
Anybody here with a similar experience?
What I've observed is that there are two types of learners. One set of learners starts from a set of instructions. These instruction sets are simple, enumerated lists. The learner executes the steps and looks at the outcome. As they advance, they might begin to explore deviations in the steps. Reasoning like, "What happens if I swap steps 2 and 3?"
The other type of learner cares less about the steps and more about the concepts behind the steps. They begin by asking, "Why do I do step 1" or "How does step 1 work?"
Everyone I've taken under my wing who engages in the enumerated list approach hits a wall at some point. When the complexity of the enumerated list extends beyond their ability to tweak inputs, they become frustrated, and ultimately give up.
The latter group tends to get further, but it's not a free pass to stardom. Many still hit a conceptual complexity wall where they fail to grasp some of the more abstract concepts.
The former group far outnumbers the latter, in my experience. I'd posit that what makes learning to code so hard is that it is most accessible via a means of learning that fewer people possess.
Because it's f*cking complicated!!!
I personally found these important in becoming a better programmer:
1. Code imitation, coupled with understanding
2. Code reviews from a next-level programmer
3. Blogging and Q&A (Stack Overflow and the like)
4. Confidence
Too few employers are willing to pay for learning and evaluation.
Otherwise, you're just wasting everyone's time.
Having signed up for the product since I wanted to be nice, I've found all I've done is join a mailing list for a product that might launch at some point in the future.
This isn't a startup to review or comment on, it's an MVP landing page test.
That being said, first of all and completely off topic, I'm glad to see you're in Heslington (York alum here, still living in the north east). Perhaps you could consider spelling out on the site whether you are planning on being a middleman, an information centre, or a facilitator.
Will you show information for local box schemes, local farm shops, markets, and independent cafes?
I'm quite interested in seeing what you're trying to do, since I've recently had such problems searching for box schemes in the North East.
Any chance of finding some way of indicating the density of local suppliers before I sign up?
I would prefer to use OpenID or OAuth over creating yet another username and password combination; plus, I have no way of telling whether you will keep them secure.
Also, you're fighting the likes of Ocado, Abel and Cole, Waitrose there. That's not a fight I'd walk into myself.
I want it organic, sliced, and delivered twice per week. I know that this sounds like a logistical nightmare, but that would be something with a crystal clear positioning.
However, I shouldn't have to sign up to see whether there is anything useful beyond the signup page, i.e. to see the actual content. You've lost me right there. I'd bet that if you let the content show before signup, you'd get far more (and far more meaningful) signups.
Key question for me - I can't see what I'm buying so how can I be sure that it is good quality/fresh? I assume some kind of buyer-rated reputation system for producers would be useful for this kind of thing? Would like to see how you resolve this.
Also - I can't tell whether you or the sellers are supposed to handle the actual transportation of goods.
Design-wise I can only say that it matches the expectations of consumers in the demographic pretty much spot on. And I like it :)
Would be good to follow your progress so will stick to Twitter for now, but in-depth blog posts would be welcome if you find time.
In addition to what others have said: I find the overall design good, but I don't like the stripes/greyness over the food image (http://www.farmly.net/images/home-banner.jpg) at all. Getting the color of food images right is always tough, and this definitely fails. For some images, e.g. the strawberries, it may be fine because the red is very saturated, but others, like the cheese or the cupcakes, look grey and too dark, which is not what food should look like.
Also, I'm not sure if the transition helps. Makes it harder to look at the images and takes the attention away from the text.
This isn't useful to me because I'm not in the UK currently, but my father would be interested, so I wanted to see if there's anything near him before I send him the link. I'm not interested in signing up to find out, plus I'd have to enter a false postcode (my dad's) to do so.
I'm not sure people will order local food online in the US, but I think a case can be made for a tablet app that lets the user sit on their couch and browse local farmers and read about a specific farmer's growing methods, farm photos, selling locations/options, etc. The end goal for the user would be to meet up with the farmer at a market, CSA, etc. to make a purchase.
Not sure how to monetize or populate it with farmers. Two-sided marketplaces are difficult.
What is the legal status of your startup? This will have an impact on the information you legally need to have on your site.
Maybe you should have titled it Ask HN: Review our startup, Farmly.net - buy fresh food in the UK (or something along those lines).
Plenty of guys from the UK here (I'm not :) ), good luck !
Does anyone have any suggestions for a serious secure phone? I don't need all the bells and whistles and don't install many apps. Mostly just email, text, and web browsing.
Ah, I see the original has an anchor in the URL: http://news.ycombinator.com/item?id=4928277
(No, I'm not a black hat -- I don't have a hat of any colour.)
That's not true. The compiler will give the function the most general type it can infer, which in this case is
holds :: Eq a => UnionFindElement a -> a -> Bool
The point of the article is sound - using the interpreter to ask for types is a great way to work (and often leads to surprising realizations which can lead you to generalize and abstract your code), but the reasoning is spurious. Most of the time, you don't need to supply types in Haskell, as the compiler is perfectly capable of figuring them out for itself.
I was a little concerned that the Python cmath library might be written in Python but from what I can tell, it's native-code from C.
"Lemaître was well ahead of his time regarding machine computing. As early as the thirties, at MIT, he used the machine perfected by Bush to solve the Störmer problem."
Source: http://www.uclouvain.be/en-316446.html
Mac Plus (incomplete??):(http://www.bigmessowires.com/category/plustoo/)
MSX: (commercial product, I'm not sure if there's any code downloads)(https://en.wikipedia.org/wiki/1chipMSX)
The popular music ICs from a variety of machines in one box:(http://www.gadgetfactory.net/2012/06/introducing-the-retroca...)
Atari ST: (http://www.experiment-s.de/en/)
Atari bits n bobs (http://hardware.atari.org/vhdl/vhdl.htm)
A variety of different systems - Videopac; Adventure Vision; Colecovision; Bally Astrocade: (http://www.fpgaarcade.com/platforms.htm)
My last one blew up a few years ago when I tried to see if it would still work (capacitors in the supply had gone, took the board with it). So this project is really tempting for me, I'll probably see if I can get this to work.
FPGAs are interesting: a kind of halfway point between software and hardware.
With 16% cell utilisation, I reckon you could get a second processor and discrete TFT driver in there as well.
A worthy and commendable initiative. Chipped in what I could.
I had meant to follow this up last week, but it slipped my mind. So, thanks for posting this again. :-]
best machine learning site: stackoverflow.com "closed as not"
The only thing I'd probably add is that there's a pretty significant gap going from learning linear algebra to more advanced topics such as LDA.
For people who are just getting started with machine learning, it's probably best to get started with implementing some of the more "intuitive" algorithms such as decision trees, k-means, and naive Bayes before moving over to some of the more recent academic work.
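As a concrete example of how small such a first implementation can be, here's a minimal 1-D k-means sketch (toy data, no external libraries; real use would of course reach for a proper library):

```python
import random

# Minimal 1-D k-means: alternate between assigning points to the nearest
# center and recomputing each center as its cluster's mean.

def kmeans(points, k, iters=100, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # initialize from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                     # assignment step
            clusters[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
        centers = [sum(c) / len(c) if c else centers[i]   # update step
                   for i, c in enumerate(clusters)]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 9.8, 10.1, 10.3]
print(kmeans(data, 2))  # two centers, near 1.0 and 10.07
```

Writing the two alternating steps by hand like this makes the later, heavier material (EM, mixture models, etc.) much less mysterious.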
Other things that are pretty useful, but often forgotten, such as feature selection, data normalization, and even data visualization. Algorithms are usually just one part of machine learning, but even the best algorithm wouldn't be able to do anything without identifying what the best features of your data are.
Still, it's a great list of more advanced topics, and definitely something I'll keep bookmarked for future reference.
It's important to understand individual algorithms, but in many ways it's more important to have a broad overview of the field and its more modern methods, so that given a problem it's possible to think about the best way to solve it and to share a common language with others who may have ideas. Beyond this list and various online courses, I've found that talking to people about their work, and having them explain the high-level concepts of every black-box classifier or similarity metric or whatever it is they use, has been quite educational.
I did note the absence of the oft quoted Andrew Ng's Coursera course on ML. I assume the author has put it under : "disruptive educational sites".
But I genuinely want to know: how does Ng's course measure up to the other resources mentioned in this post?
So I got into the habit of saying "I don't understand". Inevitably, there would be quite a number of other people who also didn't understand, but were afraid to ask, so I'd just ask first. That stuck with me throughout my career and served me well. If you don't understand, ask.
Also, contrary to common sentiment, there is no minimum competency level that grants you the privilege of saying "I don't understand." Ignorance is not a monopoly of the elite.
Another personal favorite is stating unequivocally, loud and clear that "This was my mistake". It is tempting to just fix the mistake but even if you have fixed it, if there isn't clear declared ownership, you probably haven't addressed the root cause.
Doing this keeps you honest to yourself and also removes the awkward air where no one knows who is responsible for this mistake because no one has taken ownership. To pull this off you need an environment that won't punish mistakes by default.
There are absolutely fireable mistakes but if you do this right, the employee should volunteer to be let go because he realizes the gravity of his error.
The smartest programmer in my workspace frequently looks for me when he's trying to solve something hard. There are a dozen people around us, at least, who are better able to _solve_ whatever problem he's working on. But they don't ask as many questions. I ask a lot of questions. Midway through explaining stuff he's solved his problem.
And meanwhile I've learned a ton.
If I go to a university lecture on advanced math, I won't understand things, and can say so. But it's unlikely the lecturer can say anything in the span of five minutes that will make me understand, since what would actually get me close to understanding the content of that lecture is several semesters' worth of studies leading up to it.
The senior devs might be the only people in the room who have such a solid grasp of their stuff that they can fill in their understanding with just a few minutes of explanation. Junior people don't understand either, but they might need to work over the new thing for hours, not five minutes, to get a proper handle on it, and you can't give an hours-long answer to someone who says they don't understand.
To be honest, most senior developers are guilty of not creating the kind of environment where people can comfortably admit they don't understand something, and it brings the whole team down as a result. With the exception of where I work now, the senior developers at every large company I've worked at made you feel stupid for admitting you didn't understand. There's no weakness in admitting you don't understand, but given the way companies these days throw words like Agile and lean around, it's no surprise people are afraid to speak up when a company works in three-week sprints.
While it comes down to the volatile environments managers and senior developers have created over the years, a bad economy doesn't exactly help when it comes to admitting you don't understand something you were hired to do either.
In almost every environment I've been in, whether high school or grad school, a corporate setup or a startup, I've found that "I don't know" / "I didn't get that" goes a long way. The other person in the picture usually goes out of their way to make me understand what I'm missing.
It's leagues better than working under the wrong assumptions for days, weeks, months...
And er, Donald Rumsfeld, when talking about unknown unknowns.
Reminds me of The Checklist Manifesto, which cites how the OR nurses and doctors who communicated best--they knew each other's names, and nurses could tell doctors "stop"--had fewer surgical errors.
This also has the nice side effect of bringing a different perspective to an existing problem. It's a habit that has left several managers going "wow, you sure know what's going on, or you found problems that we'd not considered". I just shrug and reply honestly, "I'm just trying to understand."
It's of course also invaluable advice for students of any age.
It works against the biggest problem we all have: miscommunication.
That is, when you're trying to explain something and someone doesn't understand, you should be patient with them. I don't think it's always (or even more than half the time) the case, but often enough, after finding out what the linchpin of understanding was, I can see ways I could have improved my first explanation.
In a negotiation, for example, if you admit you don't understand something, the other party can use that to take advantage of you.
It also hurts your credibility in front of a wider audience when giving a speech for example.
While I do think being humble should be respected, modern culture will look down on those who admit they don't understand.
Anyway, admitting you don't fully understand something is the first step to fully understanding it. Great article!
How do we create an environment where one doesn't feel it's wrong to ask for clarification without being subject to "looking stupid"?
Also, the LHC hiring protocol for students must have an attractiveness clause. ;)
(it is awesome :) )
For filtering in the literal sense, i.e., address-based packet null-routing, all you can find are general carrier routers whose routing tables are dynamically manipulated by BGP commands sent by the GFW. You can't know where the commands come from.
For the "filtering" described in this research, it's active connection disruption with spoofed TCP reset (RST) packets. The GFW mirrors traffic via some routers for detection and sends spoofed traffic for disruption. It doesn't have an IP address per se. This tool can find out from which router the GFW mirrors traffic, but not the GFW itself.
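Because the injected resets come from an on-path middlebox rather than the real server, they typically arrive with a TTL that doesn't match the server's other packets, which is one common heuristic for spotting them. A minimal sketch of that idea, over hypothetical captured-packet records standing in for a real capture (real tools would do this over a live libpcap capture):

```python
def flag_spoofed_rsts(packets, tolerance=3):
    """Return RST packets whose TTL deviates from the TTL baseline
    established by the server's non-RST packets. Packet records are
    hypothetical dicts: {"rst": bool, "ttl": int}."""
    baseline = [p["ttl"] for p in packets if not p["rst"]]
    if not baseline:
        return []  # no genuine traffic to compare against
    typical = sum(baseline) / len(baseline)
    return [p for p in packets
            if p["rst"] and abs(p["ttl"] - typical) > tolerance]

capture = [
    {"rst": False, "ttl": 52},   # normal server packets
    {"rst": False, "ttl": 52},
    {"rst": True,  "ttl": 244},  # reset injected by an on-path box
    {"rst": True,  "ttl": 52},   # reset genuinely from the server
]
# flag_spoofed_rsts(capture) keeps only the TTL-244 reset
```

This is only a detection heuristic; it tells you an injector sits somewhere on the path, which is consistent with the point above that you can locate the mirroring router but not the GFW itself.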
Here is a previous illustration on the topology of GFW networks: https://media.torproject.org/image/community-images/topology...