Difficult decisions

1. Think in Years, Not Days

The most successful people, Banfield found, “are intensely future-oriented. They think about the future most of the time,” rather than thinking only of the next few hours or even minutes.

2. Understand the Effects of Decision Fatigue

3. Cut Down on the Number of Decisions You Have to Make Each Day

4. Consider the Opposite

5. Stay Away From the ‘What If’ Game

The bottom line of decision making is determining which option will offer the best possible outcome based on what we know now.

Good decisions don’t ensure success, but bad ones almost always ensure failure.

From https://zapier.com/blog/difficult-decisions/

Signs of a bad programmer

If your skills deficiency is a product of ineffective teaching or studying, then an alternative teacher is the compiler itself. There is no more effective way of learning a new programming model than starting a new project and committing yourself to use whatever the new constructs are, intelligently or not.

… a good programmer will search for a built-in function that does what they need before they begin to roll their own, and excellent programmers have the skill to break down and identify the abstract problems in their task, then search for existing frameworks, patterns, models and languages that can be adapted before they even begin to design the program.

… you must have discipline. Being aware of flaws in your plan will not make you more productive unless you can muster the willpower to correct and rebuild what you’re working on.

Imagine your program’s input is water. It’s going to fall through every crack and fill every pocket, so you need to think about what the consequences are when it flows somewhere other than where you’ve explicitly built something to catch it.

From http://www.yacoset.com/Home/signs-that-you-re-a-bad-programmer

Turning an XSD into a documentation file

This XSD stuff is not for me, f*cking unreadable :/ Thank God there is a stylesheet that makes it prettier, like documentation.
First, download http://www.w3.org/2008/09/xsd.xsl locally to avoid the browser’s cross-domain restriction (the fastest solution for me right now); otherwise you get an error like:

Unsafe attempt to load URL https://www.w3.org/2008/09/xsd.xsl from frame with URL http://localhost/kovoinox/temp/stock.xsd. Domains, protocols and ports must match.

After downloading, put it next to the XSD file and insert the stylesheet reference as the 2nd row in the XSD:

<?xml version="1.0" encoding="Windows-1250"?>
<?xml-stylesheet type="text/xsl" href="xsd.xsl"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
 ... the rest of the XSD ...

After opening the XSD in the browser you get a much nicer interpretation.
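If you have more than one schema, the insert step can be scripted as well. A minimal sketch, assuming GNU sed and reusing the stock.xsd file name from the error above (here a tiny demo schema is created so the snippet is self-contained; point it at your real file instead):

```shell
# Fetch the stylesheet once, next to the schema, e.g.:
#   curl -sO http://www.w3.org/2008/09/xsd.xsl

# Demo schema standing in for your real stock.xsd:
printf '%s\n' '<?xml version="1.0" encoding="Windows-1250"?>' \
    '<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"/>' > stock.xsd

# Insert the xml-stylesheet processing instruction as the 2nd row:
sed -i '1a <?xml-stylesheet type="text/xsl" href="xsd.xsl"?>' stock.xsd
```

After that, opening stock.xsd in the browser picks up xsd.xsl from the same directory.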


Solution from http://stackoverflow.com/questions/6686124/how-to-turn-xsd-files-into-documentation-file

Link checking as cache warmup or integration testing

Sometimes I use wget’s awesome recursive spidering (crawling) feature (an alternative to linkchecker) not only for broken-link checking but also for cache warmup or for finding PHP errors after a commit.

In production, for some projects, I log PHP errors into a separate log file per day:

if (!is_dir(DOC_ROOT.'/log')) {
    mkdir(DOC_ROOT.'/log');  // create the log directory on first run
}

ini_set('display_errors', 0);          // never show errors to visitors
ini_set('display_startup_errors', 0);
ini_set('log_errors', 1);              // log them instead
// one file per day of month; the names repeat (and are appended to) next month
ini_set('error_log', DOC_ROOT.'/log/php-error-'.date('d').'.log');

So after crawling I check if there is a log file.
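That check can be scripted too. A minimal sketch, where DOC_ROOT is a shell variable standing in for the PHP constant above (it is assumed to be set in your environment):

```shell
# After a crawl, see whether today's PHP error log appeared.
# The path matches the ini_set('error_log', ...) above.
logfile="${DOC_ROOT:-.}/log/php-error-$(date +%d).log"
if [ -s "$logfile" ]; then
    echo "PHP errors were logged during the crawl:"
    tail "$logfile"
fi
```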
And the wget shell script that crawls the site is:

#!/bin/bash

timestamp=$(date +"%Y%m%d%H%M%S")

cd /tmp
time wget -4 --spider -r --delete-after \
     --no-cache --no-http-keep-alive --no-dns-cache \
     -U "Wget" \
     -X/blog \
     http://www.example.com -o"/tmp/wget-example-com-$timestamp.log"

During the crawl your site gets hit and the cache is warmed up, if you have one implemented, for example phpFastCache.
I crawl only via IPv4 and without HTTP keep-alive (purely for better performance). Setting a unique user agent is also handy for filtering the access_log.
-X stands for excluding a path, which is also handy for improving performance on large sites.
-o writes the wget status report, where you can search for HTTP status codes such as 404, 403, 500, etc.
Remember the excluded path: you can run another wget on it in parallel if you need to. Unfortunately, wget can’t run parallel threads as of this writing.
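Since wget can’t parallelize on its own, one workaround is to split the URL space between two wget processes with -X/-I and let them run side by side. A rough sketch (example.com and the /blog split are placeholders, as above; saved as a helper script so it can be reused):

```shell
# Two wget spiders sharing the work: one takes only /blog (-I),
# the other takes everything else (-X).
cat > parallel-crawl.sh <<'EOF'
#!/bin/bash
wget -4 --spider -r --delete-after -U "Wget" \
     -X/blog http://www.example.com -o wget-main.log &
wget -4 --spider -r --delete-after -U "Wget" \
     -I/blog http://www.example.com/blog/ -o wget-blog.log &
wait   # both run concurrently; wait blocks until they finish
EOF
chmod +x parallel-crawl.sh
```

Each instance writes its own log, so afterwards you can grep both wget-main.log and wget-blog.log for error status codes.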