Wonder if that deal came with a poison pill. I doubt Apple wants to be a counterparty to a mobile licensing deal with Google at this point.
This is just my personal opinion, with no more knowledge than anyone else, but this HTC acquisition looks different from the Motorola one. It has the hallmarks of an acqui-hire, which implies Google may no longer be content to sit by as a cornucopia of OEMs ship commodity hardware using off-the-shelf parts and small tweaks; that's never going to pull the market forward the way Apple can with vertical integration.
The $330M price is so low none of the HTC investors are going to make any money on it. HTC's mobile phone business has been unprofitable for years and Google won't make any money on it either.
Who knows, maybe Google is planning on using the HTC business unit as a sort of R&D lab for Android hardware, with no real plan to make it a traditionally profitable business.
Still, an open question is why they would feel the need to buy an existing company. Couldn't they simply create a hardware company from scratch with their resources? HTC is not exactly a world leader or some unique innovator here.
Basically, I feel that completely removing the physical map is okay until you've picked a target. Then having to click on it to be able to see what the route looks like (which streets to take, etc.) is higher friction than I'd like. Instead, imagine if hovering would give you a route overlay, and as you hover your mouse over multiple places you're considering, you're already aware of the physical directions as well.
Having to click back and forth feels quite constraining.
This is simply feedback on a way I think it could be improved further, not to take away from how good it already is.
It pretty much makes isochrone maps all over the city, gets Google public-transport and driving times, and computes a ratio. I've started work on a better version 2, but so far not much progress.
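For anyone curious, the core computation can be sketched in a few lines. The travel times below are made up (a real version would fetch them from a directions API such as Google's), so treat this purely as an illustration of the ratio idea, not the poster's actual code.

```python
# Hypothetical sketch of the transit/driving "ratio" idea: for a grid of
# points, divide public-transport time by driving time. Values near 1 mean
# transit is competitive; large values mean driving wins.

def transit_driving_ratio(transit_minutes, driving_minutes):
    """Ratio of transit time to driving time for one point."""
    if driving_minutes <= 0:
        raise ValueError("driving time must be positive")
    return transit_minutes / driving_minutes

# Fake grid points: (lat, lon) -> (transit_min, driving_min)
grid = {
    (47.61, -122.33): (12, 10),   # downtown: transit nearly as fast
    (47.70, -122.30): (45, 15),   # suburb: transit 3x slower
}

ratios = {pt: transit_driving_ratio(t, d) for pt, (t, d) in grid.items()}
for pt, r in sorted(ratios.items()):
    print(pt, round(r, 2))
```

Coloring each grid point by this ratio is what produces the isochrone-style overlay.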
It's nice to see real thought, study and execution into new ways of portraying things that have the possibility of becoming stale. While maps and their functionalities are very much "still in development" with many developers adding new features to them... most of these "new features" don't try to rethink how we see and use them. They just extend the feature set instead of stopping and trying to re-think what a map is and what it is supposed to do.
It may be more effective to catch the train to a grocery in a completely different part of town than to walk 30 minutes to the one in your own neighborhood.
But I was never smart enough to implement it. This goes a bit along the way but hopefully someone comes along and implements that, I think I would find it very useful.
It'd be interesting to take travel data and cluster it such that you end up with an isopsychochronic projection. Commute visualizations I've seen end up feeling kind of close.
Any suggestions for the best (open) software to achieve this?
But distance isn't the only important geospatial factor. Frequently I want to find a place to eat/drink that's on the way to another destination (such as a movie theater). This kind of chrono map would be more useful in a new city in which I don't know that a place 0.2 mi away to the west involves crossing an interstate. In a setting I'm familiar with, it's probably not particularly useful on mobile (given the limited dimensions for showing points and text labels), but could be great on print displays. It'd allow designers to show geospatial/time info without also having to render a full map.
On the topic of Yelp and other listing services, maybe some refinements could be made to make lists more geospatially useful. No reason why the list view has to show just distance, rather than time traveled. Or to include a filter option for direction, so that I can just see things west or south of me. It's pretty frustrating sometimes having to switch back and forth between list and map.
I can't seem to run a search in my location in NY. "Find Me" changes the address but the results are stuck in Seattle. I can't seem to change the query term either.
If you center on the Central West End in St. Louis, you can clearly see that development has mostly happened in the western suburbs:
Nonetheless, an interesting concept!
I.e. I appreciate radius from center as a very useful representation of travel time.
But I would liberate X and Y to be things such as rating and cost (to give two likely examples).
Once you distort space, you might as well go all-in (in this view) and let it pack in two more dimensions.
The resulting clusterings would be very interesting and useful I imagine.
Computer science is probably one of the most over-documented fields. Everyone seems to have compiled a list of resources at least once in their life, like a rite of passage.
I'd love to see open source curricula for Economics/Business, Physics, Music, Literature and other stuff.
Try this one...much better.
A list of courses is not enough, haha
Has anyone put together a list like this for a subject like chemistry?
> A final thing about SPACs is that they are so expensive. Banks charge a rack rate of about 7 percent for initial public offerings, though big sexy tech IPOs tend to be done more cheaply. SPAC sponsors compensate themselves rather more lavishly. Hedosophia's sponsor -- a Cayman Islands company owned by Palihapitiya and his co-founder -- invested $25,000 to found the SPAC. In exchange for that nominal payment, and their work on finding a company to take public, they get 20 percent of the SPAC's stock. (They are also putting in another $12 million or so to buy warrants in connection with its IPO.) A 20 percent fee for taking a company public is just ... more ... than a 7 percent fee. And that's not even counting the 5.5 percent fee that Credit Suisse charged for taking Hedosophia public! Something like a quarter of every dollar that investors are putting into Hedosophia is going to compensate financiers for doing the work of (ultimately) taking a unicorn public, which is a funny way to make that process more efficient.
What makes the tech ecosystem thrive is the flexible capital and labor model. Anyone can get a little money to chase their idea. The small ideas get starved for capital and labor until they get market validation. Then the capital and labor chases them. And that's how great companies grow so quickly in a land of startups.
Anything that restricts mobility of labor hinders this and should be fought. (Example: Non-competes, cost-prohibitive real estate, etc)
Anything that restricts mobility of capital should be fought too. To have capital available for great ideas, it should be easy to flee ideas that aren't working out. (This is also why share buybacks from mature companies are fine - the capital gets recycled)
Why would you want to do business by the rules in the US when the rules only exist to prop up rent seekers and the ruling elite?
This site (https://jeromeetienne.github.io/AR.js/three.js/examples/mobi...) on my phone is telling me that my webcam can't be found. :|
I am not always positive about WebGL stuff, but this one runs quite well on my devices.
When the iPhone 8 and X came out, I was thrilled more by iOS 11 featuring ARKit, but bummed out that my A8-powered device could not run their advanced augmented-reality software. The price points are high for me right now, and what the devices provide is incremental, making me wonder when smartphones will taper off like PCs did in terms of delivering power across new generations of chipsets.
For the time being, software like this allows hobbyists like me to play around with concepts I'd love to explore but am otherwise locked out of for now. Thanks for sharing!
This isn't a case where I _know_ I only want 2017 results, and so I do the syntax to filter it down automatically. I want all results, but I want to be aware of the timeline of whatever I'm going to click.
But to take the thought further: I can understand when a date isn't important. Say some documentation for a specific programming related thing. You'll probably learn to use !clojuredocs or something.
What about outside that? Those searches I can't quite describe without thinking, but my example above sort of works nicely because that game in particular has changed a bunch (and will continue to) over time and you do care about the date of a forum post or whatever.
For all I know, the answer is "that's when you use !g".
If anyone has doubts because they tried it years ago, I'd say go for it again.
DuckDuckGo isn't even a full search engine. They don't crawl the whole Web. The heavy lifting is done by Bing and Yandex. That allows DuckDuckGo to have coverage without much infrastructure. That's what makes the business possible without too much expenditure.
This was true when Google was created.
No one had the processing or memory available on their desktop to search an entire index of the "useful" web.
How large is a "useful" index of the web today? And can it fit on your laptop? The answer is yes.
Can the entire thing be queried fast? The answer is yes.
As an example, take the Stack Exchange and Wikipedia dumps in their entirety (including images). Compressed, they come to the 50-60 GB range. Think about that number. That's a rough approximation of all known human knowledge. It's not growing too fast; it has stabilized. To query the content you need an index.
So how large is an index for a 100 GB file? Generally around 1 GB. Let's say you use covering indexes with lots of metadata and up that to 5 GB to support sophisticated queries.
With today's average hardware you can search the entire thing in milliseconds.
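As a small proof of concept (not the poster's setup): SQLite's FTS5 extension, which ships with stock Python builds, already gives ranked full-text search over a local corpus with no server at all. The documents below are stand-ins for a real dump.

```python
import sqlite3

# Minimal sketch of local full-text search using SQLite's FTS5 extension.
# A real local search engine would index a Wikipedia/Stack Exchange dump;
# these tiny documents are placeholders.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
conn.executemany(
    "INSERT INTO docs VALUES (?, ?)",
    [
        ("Isochrone map", "A map showing areas reachable within a travel time."),
        ("Inverted index", "A data structure mapping terms to documents."),
        ("PageRank", "An algorithm ranking pages by link structure."),
    ],
)

# Ranked query: bm25() orders results by relevance, entirely offline.
rows = conn.execute(
    "SELECT title FROM docs WHERE docs MATCH ? ORDER BY bm25(docs)",
    ("map",),
).fetchall()
print(rows)
```

Scaled up, the same shape of query over a few-GB index is exactly the "milliseconds on average hardware" scenario described above.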
So why aren't we building better local search?
Because everyone is conditioned to believe, thanks to Google's success, we need to do it online. Which means baking in the problem of handling millions of queries a second into the Search problem. Guess what? This is not a problem that local search has.
Every time a chimp or a duck needs to build a protein in its cells, it doesn't query a DNA index stored in the cloud. Instead, every cell has the index. Every cell has the processing power to query that index on the nanosecond time scale.
The cloud based search story is temporary.
If you want to index every reference to Taylor Swift's ass that every teenager in Norway, Ecuador, and Cambodia is making, then yes, you need a Google-size index. But for useful human knowledge we are getting to the point where we don't need Google scale.
If you don't believe me, look at what is possible TODAY with Dash/Zeal docset search for offline developer documentation, with Kiwix, or with Mathematica.
Some say the DDG bangs are a solution. What do I gain by using them? They only made me resort to !g all the time, because the results were so bad.
Now I use https://www.startpage.com/ with region set to Swedish. It's practically a proxy for Google search, so it gives me the right results but sans the filter bubble experience (yes, I want the regional bubble). If you're a non US user, I can recommend it.
The one silly thing I miss about not having DDG at work: in DDG, I can type "new guid" and it gives me a new random guid. If there's a way to do that in Google, I haven't figured it out. (And yes I know there are a million other ways to get random guids. It's just convenient for me to get them this way.)
Yes, I have this often; when I click from, say, page 5 to page 6 it suddenly says NO RESULTS, and I'm always left flabbergasted with the thought "But Google... you just told me I got 11 million results to search through myself..."
This is Google marketing and brand perception at work, because Google results of late (the last three years) have been unimpressive: you have to sift through pages of useless links and content to find any relevant information beyond the usual suspects one already knows. Their intensive spyware operations don't seem to help search quality.
It's surprising there are not more experimental search projects. One would have expected a steady stream of regular attempts, but not a single credible effort exists. There was once an alternative search project called Cuil that just seemed to fizzle out.
Today, it's just Google. I've been using DDG for a few years, but about 1/3 of the time I add an !g because I don't find the results I need on DDG.
The cost of entry to the search market is exceedingly high right now. This is a pretty good article detailing how one person was able to come up with an idea and challenge the behemoth in very niche areas (privacy/the NSA leaks were probably the reason I started looking at/using it around 2013).
Yet I still miss the days of using multiple search engines and seeing a variety of results. I hate the de facto standard of Google. When a company controls that much of search, it gets to define the narrative. They literally shape the way many people perceive the world.
I wonder if tech will get to the point where indexing will be easier and we'll see more solutions that are cheaper and that can crawl larger datasets with lower processing requirements. Maybe the next step will be distributed search with shared indexes?
In any case, Google can't remain on top forever (at least I hope not). It'd be nice to see more tech in this space, but it's an incredibly difficult problem. There is a reason Google climbed to the top like it did.
In the past I would switch to it when feeling some google-morning-after-shame (e.g. after seeing some targeted ads); I would stick with it for a day or two, but eventually go back to the 'what I wanted is on the first page' magic of Google.
I've used DDG more consistently since changing the firefox search default, but there are still some things that I end up googling - sometimes DDG shows too many irrelevant results on the first page.
I think paying more attention to the bangs and moving away from 'keyword' searches will probably help - after all a tool is more useful if you learn how to use it properly - but for some topics (e.g. Haskell examples) if the first answer it finds isn't what I was looking for, the next couple of pages of results are usually useless too.
All you need is a keyword. For instance, let's say I'm looking for a new computer: I type "new PC" into the search box, and all you have to show me are ads related to that. I don't necessarily have to see all the things I've been looking for in the past.
Seems eminently sensible to me. And much more likely to produce ads that are relevant to what I'm looking for right now.
Quick tip: Rather than falling back to Google, try the "!sp" bang for StartPage, which queries Google to supplement its results.
I love DDG and what they stand for, and I'll gladly trade the creepy personalized results for the more organic results I get there.
It might be very good. But can such a system compete with one that does use search history as context?
It's my default search engine and without fail when a non-tech colleague is over my shoulder for an internet search they bust out laughing at the name and are insistent that it's a prank website. I persist and calmly explain. They relent and give that look you give a crazy person you don't want to argue with.
For example, let's say I want to look up "ear infection". Google will spit out a bunch of info right on the results page, often saving me the trouble of even going to another site. DDG, however, will just give me 10 WebMD links.
If they want to compete, the privacy angle isn't enough. They need comparable functionality as well.
Showing ads according to a search term is totally possible, but having no attribution attached to a "click" or an "impression" gives very little advantage to the marketer who is paying for the ads. There are cost models that allow paying for an actual action (like installing an advertised application), not just for clicks or views. Re-targeting people who have expressed interest in a product is a useful tool for marketers as well. Having some link back to the advertising campaign your users came from, along with their LTV, allows you to measure campaign productivity, which helps optimize future campaigns. And much more.
I really like the idea of not being tracked on the Internet, but it seems like it's currently not feasible to remove a tool many marketers have gotten used to.
Also another vote for Blue Iris. It's not the most user friendly software but it's awesome for the price. It can be tricky to get configured just right though...
I would love to hear if anyone has recommendations for getting Blue Iris to do some of this camera's features, such as comparing face images against a Google Photos API.
I would also be curious to know if the cameras that move & try to auto focus are actually better. That seems like a potential plus but I could see wind blowing trees or a squirrel running causing the camera to focus on the wrong things. Maybe the Nest is smarter & only goes for certain objects that it can recognize... Motion sensor recording is kind of a pain to get right on my Blue Iris. Windy trees & spiders cause me to get a lot more video than necessary.
I don't get the point of expensive, tamper-resistant cameras for home use. I'd rather have more cameras.
It's actually humorous to see these old-school guys, who have enjoyed a low-churn subscription-service cash cow for decades, be slowly disrupted.
Total denial at every turn LOL.
Why would anybody want to pay less than a third of the hardware cost, with no installation cost, a system that can move when they move, and no monthly monitoring fee? They just don't get it.
I sent them links to Nest's product announcements today. It'll be interesting to see their response.
Anecdotally, I've a Synology NAS and any* IP camera can be hooked up to it, and there are plenty with PoE, etc so what's stopping me having a camera system at home is figuring out how to drill through 60cm thick walls made of three layers of brick, insulation and render without compromising the weather seal of my house... not lack of smart features.
ps. does anyone trust wifi cameras?
(* not quite every camera is supported)
- No wiring
- Battery lasts up to a year depending on settings
- Can be plugged into power
- No internet, software, or 3rd party dependency
- 10MP photos, HD video
- One-time cost of $100 + $15 SD card
I don't need to view photos or videos remotely, I only need evidence if something were to happen, so this is an acceptable solution for me, especially considering the one-time costs.
The motion software is not great, but it's nice to know that my data isn't being sent off to China or some other bullshit proprietary service.
My next one will be an outdoor camera, and if I can get it to work with a converter (https://www.amazon.com/GE-54276-Polarized-Handy-Outlet/dp/B0...), I think that will take out almost all of the wiring work.
Well, criminals don't have to break in; they can just swipe the expensive $350 toy and walk away.
I put up some Ubiquiti cameras last year. We live on a pretty quiet street, but we had some kids pulling shenanigans (a car window was shot out with a BB gun, our Little Free Library bench was dragged out into the street, change was stolen from cars).
Since the cameras, nothing has happened, so that's a plus.
But the Ubiquiti recorder device makes the system nearly unusable. For some reason, recordings usually stop while there is still motion going on, so I end up with a lot of little 15-30 second videos that show only the start of whatever is happening. I've fiddled with it a lot and had no luck.
I've spent so much time screwing around with generic Chinese hardware that's, well, sometimes great (and sometimes truly worthless). Most generic brands (that exist for 6 months at a time) have software/firmware that leaves much to be desired and is sometimes outright broken, but more often than not they at least implement (most of) an open-source IP-cam standard (unlike the fancy expensive cameras).
But I have no interest in trading any sort of even theoretical access to video of my home for a fancy web interface: http://shodan.io/
Most home users aren't going to pay full retail for cameras, hire a professional installer to put 1-4 cameras outside, then pay for cloud storage... or at least not without a huge amount of Support effort. Given their feature list, they aren't going to be able to compete with the business/enterprise market due to both having worse features AND being more expensive.
(shameless plug: I work for a company making enterprise IP cameras, and both our cloud storage AND cameras are cheaper, while having way more features)
Or spend $42 and get its outdoor parent:
My requirements were, I thought, fairly simple. Good quality outdoor camera which uploaded motion off-site (cloud, FTP, whatever).
Apparently, that doesn't exist.
Most of the cameras fail the first requirement. Terrible video quality, tiny field of view, poor build quality that results in frequent failures, and/or bad track records with security bugs.
Some do okay on the first mark, but fail on the second. It's beyond me why most security cameras don't have at least _some_ ability to upload motion off-site. I have an Amcrest indoor camera, really terrible, but at least it will automatically upload motion to an FTP server. But it seems none of their outdoor cameras upload motion; only still images. !?
Hikvision cameras were frequently recommended as capable and good quality, but they were riddled with security bugs. Totally unacceptable for a _security_ camera.
Nest, from the article, has good build quality, and AFAIK no serious security issues. But no normal American can actually use them, because for all intents and purposes they require ~2 Mbps of constant upload bandwidth per camera. Google being Google requires the cameras upload all footage, 24/7 to their servers. For decent quality, that means 2Mbps of bandwidth. There's really no other way to use the cameras without doing that constant upload. Most Americans don't have much upload bandwidth. I only have 10 Mbps egress, which means each camera eats 20% of my bandwidth. Totally useless.
Not to mention uploading 24/7 live footage of your home to Google, and paying hundreds of dollars per year in subscription fees for the privilege.
Cleverloop sells a system similar to Nest, but they do all their machine-learning motion detection and such locally. So you don't burn upload bandwidth, and you don't have privacy issues. Plus no subscription fees. But their cameras are unreliable, 720p, limited FoV, and not standards-compliant, so you can't use them outside of Cleverloop's system.
There was another system, whose name I forget, which was all-around good and met all the requirements ... except that apparently all the cameras have a bug where they often delay recording after motion detection by 5 seconds or so. Wow...
All the other systems fall into either poor quality, buggy software, security issues, etc, etc.
Basically, for the average consumer there are _no_ good options. I'm an engineer so I have the advantage of getting my hands dirty customizing things but most people don't have that option. Most people need a system that just works. It's totally crazy that such a thing doesn't exist. And personally, I'd rather not spend my time building a custom camera system. I, too, want something that just works.
Well, after days of research the solution I landed on: I bought some plain ole PoE cameras from Costco. They have good specs (4MP, 90 FoV) and I trust Costco to vet for quality. Still fairly cheap (~$150 per camera). I bought an old small form factor Dell, Core i5, 2TB storage for $250 to act as the local recording station. I'll slap some software on there, ZoneMinder or Blue Iris or something, and hopefully get it to detect motion and upload those clips to the cloud. The cameras will also automatically upload snapshots over FTP (seriously why can't they just upload video!?) as at least a redundant system.
I'm just aghast at how difficult and cruddy all of this is.
That's all a camera provides: evidence, after the fact, unless you're Scarface and monitor everything.
I guess I'm probably not the target market for this though, but if you're a tinkerer there are some interesting options. And fun to be had in the firmware reverse engineering department too.
I have gone through the entire gamut, having bought Arlo, Nest, and Foscam, and tried Blue Iris, etc., and can definitively say that Nest is the easiest and most reliable outdoor camera.
That said, I'm not going to buy this trumped-up version of the camera, mainly because I don't know what it buys me over the current camera. The fact this has made the front page is kind of weird to me.
Note that this may very well violate certain state wiretap laws that require two party consent.
Ones that got caught.
Now burglars will cut your internet connection to the outside, then rob your place.
Is there some improvement in the technique beyond that - e.g. some clever speedup across the board?
Side note: local-global approximation seems to do well in graphics and visual tasks. For example in the field of alpha matting the state of the art for a while was KNN Matting which sampled locally and globally. Most methods since then have taken a similar approach.
Finally, realistic-looking GI in real time.
Games etc. really need that stuff in real time.
That's not dismissive -- no one has ever made any program that outputs a string of images indistinguishable from a real camcorder. It's just that hard.
I think whatever the next leap forward looks like, it will come from a nontraditional approach. Something strange, like powering your real-time lighting model by an actual camcorder -- set it up, point it at a real-world scene, then write a program that analyzes the way the light and color behaves in the camcorder's ground truth input. Then you'd somehow extrapolate that behavior across the rest of your scene.
That last step sounds a lot like "Just add magic," but we have deep learning pipelines now. You could train it against your camcorder's input feed. Neural nets tend to work well when you have a reliable model, and we have the perfect one. So more precisely, you'd train your neural net against the camera's input video stream: at each generation, the program would try to paint your scene using whatever it thinks is the best guess for how the colors should look. Then you move your camcorder around, capturing how the colors actually looked, giving the pipeline enough data to correct itself. Rinse and repeat a few thousand times.
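A toy version of that loop, with all assumptions made explicit: synthetic data stands in for the camcorder, and a 3x3 color transform stands in for the neural net. The point is only the shape of the training procedure (paint a guess, compare to ground truth, correct, repeat), not the rendering itself.

```python
import numpy as np

# Learn a 3x3 color transform mapping the renderer's guessed colors to the
# camera's ground-truth colors by gradient descent on mean squared error.
# All data here is synthetic; a real pipeline would use captured frames and
# a deep net instead of a linear map.
rng = np.random.default_rng(0)
true_transform = np.array([[0.9, 0.05, 0.0],
                           [0.05, 0.85, 0.1],
                           [0.0, 0.1, 0.95]])
rendered = rng.random((1000, 3))           # renderer's guessed pixel colors
captured = rendered @ true_transform.T     # what the "camcorder" saw

W = np.eye(3)                              # initial guess: colors unchanged
lr = 0.5
for _ in range(2000):                      # "rinse and repeat"
    pred = rendered @ W.T                  # paint the scene with current guess
    grad = 2 * (pred - captured).T @ rendered / len(rendered)
    W -= lr * grad                         # correct toward the ground truth

print(np.abs(W - true_transform).max())    # recovered transform error
```

The interesting (and unsolved) part is everything this sketch elides: backprojecting real illumination onto the virtual scene, and generalizing beyond the captured viewpoints.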
The key to realism, and the central problem, is that colors affect colors around them. The way colors blend and fade across a wall has to be exactly right. There's no room for deviation from real life. Our visual systems have been tuned for a billion years to notice that.
There are all kinds of issues with this idea: the real-world scene would need to be identical to the virtual scene, at least to start. The program would need to know the camera's orientation in order to figure out how to backproject the real-life illumination data onto the virtual scene. But at the end of it, you should wind up with a scene whose colors behave identically to real life.
It seems like a promising approach because it gets rid of the whole idea of diffuse/ambient/specular maps, which don't correspond to reality anyway. My favorite example: What does it mean to multiply a light's RGB color by a diffuse texture's RGB value? Nothing! It's a completely meaningless operation that happens to approximate reality quite well. There are huge advantages to that approach, like the flexibility of letting an artist create textures. But if the goal is precise, exact realism as defined by your camcorder, then we might be able to mimic nature directly.
(Those dynamic occluders looked incredibly cool, by the way!)
    loop do
      DB.transaction do
        # Pull jobs in large batches
        job_batch = StagedJobs.order('id').limit(1000)
        if job_batch.count > 0
          # Insert each one into the real job queue
          job_batch.each do |job|
            Sidekiq.enqueue(job.job_name, *job.job_args)
          end

          # And in the same transaction, remove these records
          StagedJobs.where('id <= ?', job_batch.last.id).delete_all
        end
      end
    end
Then the enqueuer can do a preliminary scan of the table when it boots up and then just a `LISTEN` instead of polling the DB.
I've taken fire before for suggesting that any job should go into a database, but when you're using this sort of pattern with an ACID-compliant store like Postgres it is so convenient. Jobs stay invisible until they're committed with other data and ready to be worked. Transactions that rollback discard jobs along with everything else. You avoid so many edge cases and gain so much in terms of correctness and reliability.
Worker contention while locking can cause a variety of bad operational problems for a job queue that's put directly in a database (for the likes of delayed_job, Que, and queue_classic). The idea of staging the jobs first is meant as a compromise: all the benefits of transactional isolation, but with significantly less operational trouble, and at the cost of only slightly delayed jobs as an enqueuer moves them out of the database and into a job queue.
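To make the pattern concrete, here's a minimal sketch with SQLite standing in for Postgres and a plain list standing in for Sidekiq; the table and job names are invented for illustration.

```python
import sqlite3

# Staged-jobs pattern: jobs are inserted in the same transaction as the data
# they relate to, so a rollback discards both, and an enqueuer later drains
# committed rows into the real queue.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
db.execute("CREATE TABLE staged_jobs (id INTEGER PRIMARY KEY, job_name TEXT)")

# Transaction 1: data and job committed together.
with db:
    db.execute("INSERT INTO users (email) VALUES ('a@example.com')")
    db.execute("INSERT INTO staged_jobs (job_name) VALUES ('send_welcome')")

# Transaction 2: rolls back -- the job vanishes along with the user row.
try:
    with db:
        db.execute("INSERT INTO users (email) VALUES ('b@example.com')")
        db.execute("INSERT INTO staged_jobs (job_name) VALUES ('send_welcome')")
        raise RuntimeError("something failed mid-transaction")
except RuntimeError:
    pass

# Enqueuer: drain a batch into the "real" queue and delete it, atomically.
queue = []
with db:
    batch = db.execute(
        "SELECT id, job_name FROM staged_jobs ORDER BY id LIMIT 1000"
    ).fetchall()
    if batch:
        queue.extend(name for _, name in batch)
        db.execute("DELETE FROM staged_jobs WHERE id <= ?", (batch[-1][0],))

print(queue)  # only the committed job survives
```

Only the job from the committed transaction ever reaches the queue; the rolled-back one never becomes visible to the enqueuer.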
I'd be curious to hear what people think.
Just be sure to run your enqueueing process as a singleton, or each worker would be redundantly enqueueing lots of jobs. This can be guarded with a session advisory lock or a redis lock.
Knowing that this easy transition exists makes me even more confident in just using Que and not adding another service dependency (like Redis) until it's really needed.
Doesn't this just mean a bunch of lost jobs when Redis fails?
Why not keep jobs, with job states like waiting, done, etc., in the reliable ACID store?
There's a million things wrong with the lifestyle. Even if you don't die, you still do in a sociocultural sense. Divorce rates are near 100%. Nearly all of your job is classified or top secret, so you can't discuss it with anybody. Even your psychiatrists have to have top secret clearance just to talk to you. It's nearly impossible to have any commitments (social, financial, educational, etc.), because you can be pulled away at any moment, and you can't even tell people when you expect to return. Bankruptcy is extremely common, and nobody has good credit. It's nearly impossible to leave and do any other occupation successfully.
And yet, even though he left the Green Berets to become a mostly unsuccessful but still boring CPA, and even though I basically have no relationship with him anymore, I still have this undeniable urge to be a part of it. I read the books, seek out the stories, devour any news of operations, etc. And while my dad paid lip service to how bad it was, he's secretly proud of his two special-operations sons and he clings on to their lives, trying to relive his own.
(This also made me realize I have a weird saddle-curve tropism: articles that are clearly on topics I care about I always click on, even though they often turn out to be me-too. Stuff that's mildly interesting looking I rarely click on because who has the time? But stuff that looks like it's clearly not-HN fodder I'll click on. This means I'd probably click on a seeming listicle or something with "Kardashian" in the title)
It's an intimate interview with a Vietnam vet (the green beret). A truly beautiful piece of radio IMHO.
Why, is Twitter search able to go back more than 2 weeks this Wednesday? Any wagers on what the look-behind horizon will be in a hundred years? No odds offered for "0".
Now compare that with the Spanish government. Banning a referendum, raiding political parties and Catalonian government offices, arresting elected governors, flirting with extreme right ideas, de-facto suspending the Catalonian autonomy and basically abandoning any democratic dialog.
Now there is no going back. I wish the best of luck to my friends in Catalonia. May they be allowed to express their desire to leave or stay in Spain in peace. I hope no one gets hurt during these turbulent weeks.
(the Spanish police seem to be raiding anything that could potentially be related to Catalan independence)
1. Use the := syntax
`:=` means the variable is expanded immediately; `=` means expansion is delayed until the variable is used, which carries a speed penalty in a large Makefile. If you know that a variable is fully defined (i.e. all the `$(...)` references in its value are fixed) at the point where it is defined, use `:=`. That's the case here, where the Makefile starts:
    GOCMD=go
    GOBUILD=$(GOCMD) build
    GOCLEAN=$(GOCMD) clean
    GOTEST=$(GOCMD) test
    GOGET=$(GOCMD) get
    BINARY_NAME=mybinary
    BINARY_UNIX=$(BINARY_NAME)_unix
    GOCMD := go
    GOBUILD := $(GOCMD) build
    GOCLEAN := $(GOCMD) clean
    GOTEST := $(GOCMD) test
    GOGET := $(GOCMD) get
    BINARY_NAME := mybinary
    BINARY_UNIX := $(BINARY_NAME)_unix
    build:
        $(GOBUILD) -o $(BINARY_NAME) -v
    build: ; $(GOBUILD) -o $(BINARY_NAME) -v
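A minimal demonstration of the `=` vs `:=` difference (my own example, not from the article's Makefile): with `=`, the `$(shell ...)` call re-runs every time the variable is expanded; with `:=`, it runs exactly once, at assignment time.

```makefile
# '=' (recursive): the shell command runs each time NOW is expanded.
NOW = $(shell date +%N)
# ':=' (simple): the shell command runs once, right here.
ONCE := $(shell date +%N)

demo:
	@echo "NOW:  $(NOW) vs $(NOW)"    # the two expansions may differ
	@echo "ONCE: $(ONCE) vs $(ONCE)"  # always identical
```

On a Makefile with many variables that each shell out or reference other variables, those repeated expansions are where the speed penalty comes from.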
GOPATH is one of the dumbest ideas I've seen. Multiple large projects under the same directory? Who thought that was a good idea? (And yes, I know you can create subdirectories, and mix all your packages together to create a giant mess.)
So I separate the projects into their own directories.. and use a Makefile to set gopath and run the go commands.
Still trying to figure out why Go tried to be innovative here. Why couldn't they use the cwd like every other compiler I've ever seen?
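A sketch of the kind of wrapper described above (directory and target names are my own, purely illustrative): the Makefile pins GOPATH to a directory inside the project before invoking the go tool, so each project gets its own workspace.

```makefile
# Hypothetical per-project GOPATH wrapper (pre-Go-modules workflow).
# All go commands run below inherit this GOPATH via export.
GOPATH := $(CURDIR)/.gopath
export GOPATH

build:
	go build -o bin/myapp ./...

test:
	go test ./...

.PHONY: build test
```

`$(CURDIR)` is set by make to the directory it was invoked from, so the workspace follows the project wherever it is checked out.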
    # Go parameters
    GOCMD=go
    GOBUILD=$(GOCMD) build
    GOCLEAN=$(GOCMD) clean
    GOTEST=$(GOCMD) test
    GOGET=$(GOCMD) get
    BINARY_NAME=mybinary
    BINARY_UNIX=$(BINARY_NAME)_unix
    $(GOBUILD) -o $(BINARY_NAME) -v ./...
    CGO_ENABLED=0 GOOS=linux GOARCH=amd64 $(GOBUILD) -o $(BINARY_UNIX) -v
In my opinion, this level of abstraction may be going a little far:
    GOCMD=go
    GOBUILD=$(GOCMD) build
    GOCLEAN=$(GOCMD) clean
    GOTEST=$(GOCMD) test
    GOGET=$(GOCMD) get
By just invoking them directly in the targets you'll come out with something more readable and no less maintainable:
    build:
        go build -o $(BINARY_NAME) -v
I never used Go so the following may not be technically correct, but in principle:
- The `build` rule should be `build: *.go` so that it builds only if a file has changed.
- The `run` rule should have the `build` rule as a dependency, so as not to rebuild if nothing has changed. This also avoids repeating the almost-identical build command (which is not very DRY).
Plus, I'm not convinced of the usefulness of having a $GOCMD variable (or $GOGET, $GOBUILD...). But again, never used go, no idea if the tooling is as tweakable as C (for which this kind of variable is used a lot in makefiles)
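Sketching those two suggestions in Makefile terms (variable and file names are illustrative, not from the article): make the binary a real file target whose prerequisites are the sources, and have `run` depend on it.

```makefile
BINARY_NAME := mybinary

# The binary is a real file target: rebuilt only when a .go source
# is newer than the existing binary.
$(BINARY_NAME): $(wildcard *.go)
	go build -o $(BINARY_NAME) -v

# 'run' depends on the build, so nothing recompiles when nothing
# changed, and the build command itself is written only once.
run: $(BINARY_NAME)
	./$(BINARY_NAME)

.PHONY: run
```

One caveat: `$(wildcard *.go)` only watches the top-level directory; a multi-package project would need something like `$(shell find . -name '*.go')` instead.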
go run build.go build release
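A minimal sketch of what such a `build.go` task runner might look like (the task names and commands here are my own assumptions, not the commenter's actual script): each argument names a task, and each task maps to a toolchain invocation.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// taskArgs maps a task name to the command line that implements it.
// Returns an error for unknown tasks so main can fail loudly.
func taskArgs(task string) ([]string, error) {
	switch task {
	case "build":
		return []string{"go", "build", "-o", "mybinary", "./..."}, nil
	case "test":
		return []string{"go", "test", "./..."}, nil
	case "clean":
		return []string{"go", "clean"}, nil
	default:
		return nil, fmt.Errorf("unknown task %q", task)
	}
}

func main() {
	// Run each task named on the command line, in order,
	// e.g. `go run build.go build test`.
	for _, task := range os.Args[1:] {
		args, err := taskArgs(task)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		cmd := exec.Command(args[0], args[1:]...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}
}
```

The appeal of this approach is that the build script is written in the same language as the project and is fully cross-platform, at the cost of losing make's dependency tracking.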
> If the project uses CI/CD or just for consistency, it is good to keep the list of dependencies used in packages. This is done by the deps task, which should get all the necessary dependencies by go get command.
This is a very strong anti-pattern. Everyone should be using a dedicated version management tool (ideally dep, but there are others), which handles this seamlessly and behind the scenes. Introducing another place where dependencies are tracked and can be installed is a recipe for problems down the line.
There should be a BINARY_NAME target that 'run' depends on instead of repeating the build command.
The source files should be named as dependencies when they are inputs to a build step.
Anything that isn't a file should be listed as a .PHONY target. Otherwise, make gets confused if you, for example, create a directory named 'test'.
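For example (target names illustrative), declaring the non-file targets phony up front protects them from colliding with real files:

```makefile
# Without this, a file or directory named 'test' or 'clean' in the
# project root would make these targets look permanently up to date.
.PHONY: build test clean

test:
	go test ./...

clean:
	go clean
	rm -f mybinary
```

With the declaration in place, `make test` always runs the recipe, regardless of what happens to exist on disk.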
If the goal of your makefile is to make your workflow simpler and easier to discover, consider adding a 'help' target that at least describes the interesting targets. It might also direct users to a more involved explanation if there are interesting things about your project they need to know about.
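One common (if unofficial) pattern for this is to annotate interesting targets with `##` comments and have `help` scrape them out of the Makefile itself; a sketch, with illustrative target names:

```makefile
.PHONY: help
help:  ## Show this help
	@grep -E '^[a-zA-Z_-]+:.*## ' $(MAKEFILE_LIST) | \
		awk 'BEGIN {FS = ":.*## "} {printf "  %-10s %s\n", $$1, $$2}'

build: ## Compile the binary
	go build -o mybinary -v

test: ## Run the test suite
	go test ./...
```

Making `help` the first target also gives newcomers a sensible default when they run bare `make`.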
Why use a platform specific tool and restrict yourself to Linux? I'd prefer people find a similar tool that is designed to work on any platform.
On another thought... no. I'll stick in 2017 with modern tools and languages.
Inside Google, there are BUILD rules for Go, but it looks like they haven't been open-sourced in Bazel yet.
When I had kids and started walking them around in a stroller, you learn really quickly where the sidewalks with ramps are, and you (and your toddler who likes to help!) come to appreciate the buttons that open doors automatically.
I run as well, and it's not fun to trip on uneven sidewalks. Sometimes at night I'd rather run on the road where I can count on a more even surface and no branches hitting me in the face. I think I'm more inclined to shovel my sidewalk in the winter because I don't like running on compressed snow that melted in the sun and refroze overnight.
So...yeah! Fix this stuff for disabled people, and other people get to benefit. Sidewalks are for everyone.
Pavement cracks. It should be fixed, but that takes time and money. And able-bodied pedestrians are unlikely to report minor cracks. Shouldn't a $30,000 wheelchair be capable of traversing commonly encountered impediments? Cracks in pavement, small branches, doorway thresholds? The article states her wheelchair cannot. That seems like a major design flaw to me.
"...plus only 16.9 percent of Atlanta households have no vehicle..."
Really? "Only" 16.9%? If I told a product manager that "only" 17% of people use iOS 9, so we don't need to support it, I'd get laughed out of the meeting.
It's a tradeoff, like everything else. When taking the trees into account, Atlanta's sidewalks are about as well maintained as the rest of Atlanta's infrastructure.
PS: every time I visit Europe I'm struck by how many wheelchairs there are. In Eastern Europe you can go around a capital's city center for weeks without seeing one. It is that bad, even in 2017.
Michigan recently passed legislation protecting local municipalities from lawsuits where sidewalks in disrepair are to blame.
While Michigan (esp. Detroit) is exceptional in many respects, this kind of legislation, whereby neglected infrastructure is no longer a liability for the state, is likely to spread.
The sidewalks in Detroit are especially heinous: https://iainmait.land/img/photos/1920/street_crossing_2_2017...
In many Silicon Valley towns like Mountain View and Sunnyvale, the sidewalks suddenly just stop and you're forced to walk on the side of the road.
And in Cupertino many streets aren't lit, or are so badly lit that it's almost pitch black...
I never noticed these things before coming to California and I grew up in a small town near Boise, ID.
Streets were well lit and sidewalks did not just end.
And Idaho has minuscule state tax revenue compared to California, so it shocked me that this was an issue in these high-income Silicon Valley towns.
Pissing off motor traffic is probably the fastest way to get the sidewalks fixed.
Even if it is illegal, I can't imagine a judge not throwing out the citation if you show pictures like that.
edit to address the replies: my suggestion was admittedly somewhat tongue-in-cheek and, no, I wouldn't recommend anyone take the risk. Would I do it as performance art? No, because a judge/newspaper/observer would see it as a stunt undermining the whole argument and possibly even harm the cause.
In the frozen north where the frost depth is four feet, somehow our sidewalks look new compared to the pictures. Simple mismanagement in Atlanta. The sidewalk in front of my house was poured in 1983 per the stamp, and it's basically level and even. That's after 34 northern winters. The decayed sidewalks in the article must be over a century old or horribly corruptly installed.
There must be a severe lack of parking or even alleyways. If you want to live the hyper high density urban dream, there will be costs, such as nowhere to park and no ramps from parking to the street to drive wheelchairs upon.
There might be more to the story intentionally not reported so as to slant the coverage; perhaps it's a historical district where, if there were no ramps in 1830, installing a ramp in 2017 will get you sued by the historical commission. Again, if you voluntarily live somewhere unsuitable for modern standards of living, there is no reason to feel sorry for a fool, if that is the case.
In the story, why didn't the husband go get the car and pick her up?
The roads, though? Promptly plowed and salted by the city on a continuous basis.
"How dare cities in southern states refuse to build sidewalks for their residents! Sidewalks are essential to quality of life!"
There is no mention of money. No mention that people in southern cities PREFER lower taxes which means less government services.
For anyone from the New England or Colorado or the West Coast: you have great livable cities with lots of amenities but those come at the cost of higher taxes and more regulations. Southern cities may not share your civic culture.
After one particularly harrowing crossing, I insisted that she call the City and ask about what they could do. She just brushed me off, saying that they wouldn't care. I pushed her and pushed her, but she wouldn't do anything.
So I called City Hall the next day, and I said, "Hi, I'm Jemaclus. I'm in a wheelchair and attend the university. There's no wheelchair ramp at the intersection of 1st and Main, so I have to go down the street and cross from someone's driveway. Is there anything we can do to fix this?" (Yes, I made a white lie.)
The lady snapped into action immediately. "Don't you worry. I've got this," she said.
The very next day, construction workers were putting in an accessible ramp on the corner, and four days later, my friend and I were able to safely and securely cross the intersection.
I have two hypotheses that are not mutually exclusive about what happened:
1. The City probably has a budget allocated to ADA compliance or infrastructure improvements or something. It's also probably a use-it-or-lose-it situation. If you call and point out something, they're HAPPY to do it, because otherwise they lose the money next year.
2. The City definitely doesn't want to be sued for ADA non-compliance, and they will probably move with all haste toward a reasonable solution. In this case, it was a no-brainer: a wheelchair-accessible ramp at a busy intersection with tons of pedestrians.
I would encourage anyone who runs into these kinds of problems in public areas (the author's problem was actually private property, so...) to actually call the City Hall and politely inform them of the problem, explain how it's dangerous, and ask how "we" can fix it. In all likelihood, they will probably respond immediately and in a positive fashion due to the above reasons. It might not happen overnight like mine did, but eventually they'll want to cover their asses from an ADA lawsuit.
For the record, my backup plan would have been a letter to the editor of the local paper, and then talking to an attorney.
I suppose the car makers lobby really killed proper city planning in such places.
I live on a small farm in the middle-of-nowhere NC. (20min to even a gas station) I'm currently working with our local zoning dept. to get approval to use my barn in a home-occupation. Even though I'm far away, will have zero employees, and no customers on the property - I'm STILL being forced to spend close to $3K to retrofit for ADA compliance.
Life is not fair. Trying to sue and regulate this country into everyone's idea of "fairness" will bankrupt us and destroy any concept of freedom we still have left.
Using banked curbs everywhere is another, more useful alternative to normal sidewalks (less useful to some pedestrians, but they create a larger perceived separation from traffic than a shoulder, and make it impossible to kill a tire while parallel parking).
Edit: At least offer up a counter opinion if you're gonna down-vote.
I briefly worked for Boeing once and they were working on a new fuel boom for the KC-135 Fuel tanker. Spent millions in research and then some idiot takes a bunch of photos of the blueprints and physical hardware and then sells it to someone in China. Thankfully he was busted. A company just trying to make a profit on what they spent millions developing is not inherently evil.
> "Mr. Liu adamantly asserts his innocence and we fully expect he'll be exonerated after a careful review of the evidence," said Robert Goldstein, Mr. Liu's defense attorney.
The best-case scenario still seems reasonably illegal here
> On that evening in late August, Dr. Straface said he introduced himself to Mr. Liu as the CEO and asked who Mr. Liu was and what he was doing in the office.
> Mr. Liu mumbled at first, then said he was there to visit the company's head of intellectual property and also the sales director for the European division, according to Dr. Straface.
> At one point he mentioned that he was here to do business with the CEO, not seeming to realize he was looking at the CEO, Dr. Straface said.
> The FBI confiscated an unusual amount of computer equipment that Mr. Liu had brought with him, including the laptops and tablet and also two smartphones, a smartwatch, a computer thumb drive, two digital video cameras, several SIM cards and high-capacity storage drives, according to the affidavit.
If the guy pleads not-guilty, I wonder how he'd explain why he had that much equipment with him for a business meeting.
On the flipside though, industrial espionage is a wealth transfer that reduces inequality. I recently purchased tires for my car - I got some Chinese ones that were 1/2 the price of the other tires and the Chinese tires are shockingly fine. I don't know how they'd get the price and quality without benefiting from industrial espionage.
Perhaps something more ideal would be to opensource everything that's been stolen. It'd encourage companies to take security seriously and it would reduce the value of the stolen property.
Lastly, I kind of wonder how much of this problem is due to companies insisting on removing dependencies from certain people / groups. Think of all the documentation people create 'in case you get hit by a bus tomorrow'. When my grandpa worked, it sounds like it was fine to have undocumented things so long as people knew about it (e.g watch out for that machine, it's sensitive to humidity and here's how to reset it). Now, I'm sure some middle manager would insist that the 'operational knowledge' be documented and stored somewhere that becomes that much more lucrative for a thief.
Ideally Equifax will listen and either move it to equifax.com, or take down the site altogether. Since the real version seems to be answering randomly, they may as well just shut the whole thing down.
But seeing as they're a massive, bumbling, bureaucratic organization, there's probably a non-zero chance they'll try to sue me instead.
If there are any lawyers here, am I in potential legal hot water for making this site?
Combine that info with https://techcrunch.com/2017/09/08/psa-no-matter-what-you-wri... and it's enough to throw one into paroxysms.
This chaos is maddeningly absurd, and in a just world their business would be completely shut down by the government.
At this point Equifax has repeatedly demonstrated nothing but contempt for people whose information they have compromised. When are the authorities going to padlock their doors and shut down this continuing criminally reckless enterprise?
We initially asked them if they had an updated version of this API using XML or JSON, and it turned into a call with several of their salespeople trying to upsell us on some complicated drag and drop rules engine that happened to return data as JSON. So we just stuck to the legacy API. They struck me as a pretty incompetent organization.
Image backup/mirror of the tweet for when they eventually (?) delete it. As of this comment, it's still up, nearly 20 hours later.
It definitely looks like ol' Barb in accounting has a nephew that builds web pages. I bet he'd build it on the cheap!!1!!
It's time for this company to go away.
The constant bungling on Equifax's part would be hilarious if the potential impacts weren't so sad.
The request could not be satisfied.
The Amazon CloudFront distribution is configured to block access from your country. Generated by cloudfront (CloudFront)Request ID: ZU-LJh21L1Px18Bz5n20R3Nb1aApdzyce_Q6ZeeSIZ0OYiJk2v0eIA==
Obviously this does nothing for the information that's already compromised, but if enough people do it, it would help kill off Equifax (lenders will rely less and less on it, thus depriving them of revenue).
I hope they aren't too big to be held responsible.
I suppose you don't care, but it should be "lose" :-)
All jokes aside, every time I try to explain to a "normal" what is going on in "computer security" I feel like shit. The entire industry is a tire fire. And it's getting worse.
At least we have DRM in the browsers now, eh?
Forcing young children to learn specific topics can only lead to one outcome: a destruction of man's natural desire to learn about the world through association of learning with pain.
Kids learn all the time at an incredible rate. One day, my daughter came home and said, "Daddy, look at this picture I made: a dog, a cat, a pig, a horse, a cow, a chicken! These are all animals that live on a farm!" Yet when I ask her "did you learn anything at school today," the answer, invariably, is "no."
To switch out this kind of early childhood education with a curriculum of forced learning intended to mold the child into a bureaucrat's ill-conceived vision of a virtuous society is monstrous.
This article kind of reminds me of the super-high-pressure environment of Japanese education, where test scores are the be-all-end-all until you (hopefully) become the appendage of some state-supported hypercorp.
I've heard somewhere (I can't remember and my search-fu is weak) that "People need bread, but they need roses too", where bread is what makes life possible, and roses are what makes life worth living (like art and such). We hyper-optimize for bread, and burn the roses to save on fuel.
This shit makes my blood run cold.
(And before anybody jumps to the "homogenous society" explanation, Finland used to have mediocre schools before they put a lot of effort and money into them in the 70s and 80s.)
"New education alternative available to some children."
"Wealthy suburban parents begin turning to new education alternative."
"Public outcry over lack of education alternative available to middle and lower class children."
"New education alternative now being implemented in public schools."
"Wealthy suburban parents unhappy about options available in public schools, begin looking elsewhere."
I am very thankful that my parents ignored any of the hyped trends and just gave us both the most convenient education experience possible. Unfortunately, I think there is a spirit of constant, hands-on meddling that our generation has developed.
To be honest, it was really sad to hear him say that. He's a kid and doesn't understand the schedule being imposed upon him.
Without getting into a discussion about the right number of hours a 7yo should spend in class I wanted to ask another question.
What ideas have parents come up with to make it "feel" like there's more unstructured play time for kids during the school year?
But everyone is different. Our kids learned well by being allowed to pick what they studied. That may not be suitable for all children, and some, perhaps even many or most, might need more structure and rigor earlier.
See this link for more information on this approach to learning: https://en.wikipedia.org/wiki/Reggio_Emilia_approach
> New research sounds a particularly disquieting note. A major evaluation of Tennessee's publicly funded preschool system, published in September, found that although children who had attended preschool initially exhibited more school readiness skills when they entered kindergarten than did their non-preschool-attending peers, by the time they were in first grade their attitudes toward school were deteriorating. And by second grade they performed worse on tests measuring literacy, language, and math skills. The researchers told New York magazine that overreliance on direct instruction and repetitive, poorly structured pedagogy were likely culprits; children who'd been subjected to the same insipid tasks year after year after year were understandably losing their enthusiasm for learning.
Without knowing more about this study, it seems vastly more likely that sociological differences between the preschool and non-preschool groups explain this difference.
Shouldn't we be demanding at least a certain amount of uniformity in our society to ensure we are a society?
With this, and with only a little deliberate effort, the children will naturally enough learn about emotions, language, communications with others, how to interact with others, and much more.
In one word, it's called motherhood.
Do I understand motherhood? Nope. Neither does anyone else, even including the mothers themselves. So, no one knows how to replace motherhood.
To replace motherhood, we need new laws from Congress, new Federal funding, lots of achievement tests and numerical measures, researchers in child development, educational theorists, educators with special training in pre-school, kindergarten, No Child Left Behind, Common Core, some AI robots with life-like plastic skin? Nope!
The bigger picture is that not every kid is the same, and that's shaped by the child and also the parents. Our parenting style is a combination of firmness around core behaviors coupled with plenty of freedom beyond that.
It's obvious when interacting with other kids and parents that other people have very different styles. Some of the other things other kids do, like behaving crazily at the dinner table, we would shut down immediately. Other things, like putting their 2nd graders into "Kumon," a kind of cram school with lots of busy-worksheets, we avoid. Check out the messaging and branding here to get some idea: https://www.kumon.com/. Apparently on the first day of Kumon they show kids the logo and tell them that the kid isn't smiling because learning is not fun, it is supposed to be hard work. I found it horrifying that parents felt the need to push their kids into that kind of environment to "keep up".
On the other hand, the progressive school is definitely touchy-feely and there's a sense that there's less structured learning. Fortunately it seems there's a balanced approach as they get into higher grades where progressive units of inquiry are augmented with some book learning. It takes an act of faith to believe that this will ultimately pay off - that they are not "wasting their time" just playing all day. Who wants to gamble with their kid's future?
Outside of school, we try to think "anticipate and encourage" not "force feed".
For example, when we visited friends and our son always gravitated towards playing with the piano, we got him a keyboard and let him play with it at home. Soon enough he was asking "why don't I have a piano teacher?". So we got him one. He generally likes it, and practices largely on his own with just a few reminders.
On the other hand, we took him to play soccer, and after two games where he refused to step foot on the green, he insisted "I don't like any games with balls. Except dodgeball." So we didn't force it, but we did have him go to martial arts, which he enjoys. Maybe we'll have him try out soccer or baseball in the future - but only if he shows an interest.
The same goes for reading. We put lots of books in the house and also spend our own time reading physical books, so they can see us. They get curious what we're doing when we're not paying attention to them. While they aren't as advanced as some of the other kids in class, they are enjoying reading every time they pick up a book. And that's all we want.
That's exactly what pre-school is supposed to be.
Now it's "If you don't get into the right pre-school, you won't get into an Ivy League university, so you'll be a failure!".
I'm still not a fan of Electron apps. I'd rather they at the very least be offline Chrome apps, so then I'd at least be using the latest Blink/V8 combo. Not to mention it seems that anything Chromium-based likes to make a ton of little writes to disk for just about everything (even when I'm just moving my mouse across the page), so to have multiple apps doing that with their own respective config/cache directories is annoying.
That said, VS Code is the least bad editor running on top of Electron that I've used, so I'm glad this one is based on it. :)
I personally really like the ideas behind Sourcegraph, but as someone who had the extension installed for a long time, until I recently removed it after it got in the way more than it helped for the 100th time, everything I've seen so far from Sourcegraph has been pretty disappointing.