Or refactored at a later date.
I don’t mind ads so much. What I don’t want is invasive tracking and the collection of every scrap of data they can get to push ads on you. Give me some dumb ads based on the damned contents of the page and I would be fine. But no, ads are basically a synonym for tracking these days.
Oh they care. They care a lot. Particularly that you don’t have any, so they can sell all your details to any bidder.
Plus this applies to your family as well. DNA is shared, and by giving yours up you give up info about everyone related to you too.
The known unknowns and especially the unknown unknowns never get factored into an estimate. People only ever think about the happy path, where everything goes right. But that rarely ever happens, so estimates are always wildly off.
The book How Big Things Get Done describes a much better way to factor in everything without knowing all the unknowns though - just look at previous similar projects, see how long they actually took, take the average and the bounds, then adjust up or down if you have good reason to do so. Your project will very likely take a similar amount of time if your samples are similar in nature to your current task. The actual time already factors in all the issues and problems encountered, and even if you don’t hit all the same issues, your problems will likely take a similar amount of time. And the more previous examples you have, the better these estimates get.
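A rough sketch of that reference-class approach (the durations and the adjustment knob here are made up purely for illustration):

```python
from statistics import mean

# Actual end-to-end durations (in days) of past projects similar to this one.
# Crucially these are the real durations, not the original estimates.
past_durations = [12, 18, 25, 14, 31]

def reference_class_estimate(durations, adjustment=1.0):
    """Average of similar past projects, with best/worst bounds.

    adjustment is an optional multiplier for when you have a concrete
    reason to believe this project is bigger or smaller than the others.
    """
    return {
        "estimate": mean(durations) * adjustment,
        "best_case": min(durations) * adjustment,
        "worst_case": max(durations) * adjustment,
    }

print(reference_class_estimate(past_durations))
# {'estimate': 20.0, 'best_case': 12.0, 'worst_case': 31.0}
```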
But instead of that we just pluck numbers out of the air and wonder why we never hit them.
They only need it to pass once, we need it to be rejected every single time.
And other browsers can be configured to do the same. Though that is not ublock origin doing anything with the cookies, and these settings can be enabled without ublock (though you likely want ublock if you are enabling them).
I don’t think it does anything with cookies directly. It just blocks connections to domains and removes elements from pages that match patterns you give it. Removing the cookies/privacy banners does just that - removes the banner. This SHOULD opt you out of tracking, as the laws generally require explicit permission, so not clicking the accept button should be enough. But whether the sites actually follow those laws is a completely different matter.
Third party tracking cookies are normally blocked by their domain - when a tracking pixel is on the page it reaches out to a known tracking domain, which logs the visit and drops a cookie for that domain. By blocking that domain the tracking request is never made, so no cookie is dropped and there is nothing to track you with. Most tracking is done like this so it is quite effective. But it won’t stop a first party cookie from being dropped, or tracking done through that or any other data you send.
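A toy sketch of that domain-based blocking (the blocklist and the matching rule here are made up and hugely simplified - real blockers use large shared filter lists and richer rules):

```python
# Hypothetical blocklist of known tracking domains.
BLOCKLIST = {"tracker.example", "ads.example"}

def is_blocked(host: str) -> bool:
    """Block the host itself and any subdomain of a blocked domain."""
    parts = host.lower().split(".")
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

# The tracking pixel's request never goes out, so no cookie is ever set.
assert is_blocked("pixel.tracker.example")
# First-party requests to the site you are actually visiting still go through.
assert not is_blocked("news.example.org")
```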
Note that the laws don’t require permission for all cookies. Ones that are essential to the site’s function (like a cookie that carries login info) are typically allowed and cannot be opted out of (you can always delete cookies locally though, the laws just cover what sites can use). And not all sites will respect these laws, or they try to skirt around them, so none of this is 100% perfect by any means.
This is an absolutely terrible post :/ I cannot believe he thinks that is a good argument at all. It basically boils down to:
Here is a new feature modern languages are starting to adopt.
You might think that is a good thing. Lists various reasonable reasons it might be a good thing.
The question is: Whose job is it to manage that risk? Is it the language’s job? Or is it the programmer’s job?
And then moves on to the next thing in the same pattern. He lists loads of reasonable reasons you might want the feature, gives no reasons you would not want it, but says everything in a way that leads you into thinking you are wrong to want these new features, while his only actual arguments are reasons you do want them…
It makes no sense.
But no one actually follows that rule through, do they?
They do though. Loads of people new to programming read that book and create unreadable messes of a code base that follow all of his advice. I have lost count of the number of times I have inlined functions, removed layers of abstraction and generally duplicated code to get an actual understanding of what is going on, only to realize there is a vastly simpler way to structure the code that I could not see until all the layers and indirection were removed. Then refactored again to remove redundant code and apply more useful layers that actually made sense.
And that is the problem we have with his book. People that need it pick up as many bad habits as they do good ones, leading to an overall decline in their code quality. It is not until years of experience later that you can recognize the bad bits and ignore them. So overall his book is a net negative on the programming world. Not all his advice is bad, but if you can tell which parts then you likely don’t need his advice.
But on the layers of abstraction specifically, he takes this too far, largely because of the 4 line limit he has. There is a good level of abstraction, and I generally find more than 2 or 3 levels is where I start to lose any sense of what is going on. He always seems to jump on abstraction as soon as he can, but I find waiting a while and abstracting only when you need to leads to fewer and vastly better layers of abstraction overall.
And adding more abstraction does not help the problem of people doing too many things inside a function - they just move it to sub functions rather than extracting the behavior for the caller to deal with. I have never seen him give advice on when that is appropriate; he only keeps the functionality of the original function the same and moves the logic into nested functions instead, which only covers up the issue of the function doing too much.
But I feel like Uncle Bob is leaning more towards: if a task requires 100 different operations, then that should be split into 100 different functions. One operation is one thing. Maybe not exactly, but that’s the kind of vibe I get from his examples.
Oh yeah, he definitely does. He even says so in other advice, like that a function should be about 1-3 lines. Which IMO is just insane advice.
I kinda disagree with him on this point. I wouldn’t necessarily limit to one thing, but I think functions should preferably be minimal.
I do actually agree with him on that point - functions should do one thing. Though I generally disagree on what one thing is. It is a uselessly vague term, and he tends to lean towards the smallest possible thing a thing can be. I tend to lean towards larger ideas - a function should do one thing, even if that one thing needs 100s of lines to do. Where the line of what one thing is sits is a very hard idea to define though.
IMO a better metric is that code that changes together should live together. No jumping around lots of functions or files when you need to change something. And split things out when the idea of what they do can be isolated and abstracted away without taking away from the meaning of what the original function was doing. Rather than trying to split everything up into 1-3 line functions - that is terrible advice.
That should not break things though. Maybe you get a merge conflict that you need to sort out at worst. This is essentially the constant state of working with other people on a project.
I have never had git get into a state I cannot get out of. Even if that requires a reset, checkout or clean. And those are very rare. How are people breaking things so often?
Learn the tools you use daily; it saves you a lot of headache in the long term.
It is a digital signage display (ie in-store ads, menus, displays etc) - not meant for desktop use.
SIGINT is sent when you press Ctrl+C. SIGTERM is sent in just about every other situation - basically whenever the system wants the program to end, for instance when systemd wants to stop the service, or as the default signal sent by programs like kill, pkill, htop, etc. You should catch both of these signals.
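A minimal sketch of catching both in Python (assuming a Unix-like system; the cleanup here is just a print):

```python
import signal
import sys

def handle_shutdown(signum, frame):
    # Same cleanup path whether it was Ctrl+C (SIGINT) or kill/systemd (SIGTERM).
    print(f"received signal {signum}, shutting down cleanly")
    sys.exit(0)

signal.signal(signal.SIGINT, handle_shutdown)
signal.signal(signal.SIGTERM, handle_shutdown)

# Stand-in for the program's real main loop: wait until a signal arrives.
signal.pause()
```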