There are information security guides for different audiences around, including EFF's Surveillance Self-Defense and Email Self-Defense, and NIST's Cybersecurity Framework. But I failed to find concise, relatively general, and sensible guidelines on personal information security and information literacy to refer people to, so I wrote down the suggestions I would normally share.
I am not a security expert, but a programmer and a small-scale system administrator who pays attention to security. So it is a good idea to consider these suggestions critically, just like any others, but I think they improve on the average state of such guides.
Question things, do not trust blindly, require evidence and verifiability of claims, and check them; do not share personal information or give away control without a good reason; assume that "anything that can go wrong will go wrong" (Murphy's law). That is, employ scientific and engineering approaches, and try to stay honest: do not nudge things to look better (e.g., more trustworthy or certain) than they are; it is better to err on the side of safety, assuming that they may be worse than they seem. A lack of understanding makes one vulnerable to deception, so study the relevant subjects: how computers, banks, online stores, governments, and scammers work, how software and relevant systems are developed, and how the research behind them is done. General computing literacy is a part of that. Try to avoid fallacies and cognitive biases, as they tend to be exploited by adversaries.
Conversely, when providing a service, publishing software, or asking for or sharing information, it is good to make it easy for others to follow those practices: provide references, evidence, and source code; explain why the requested information is required (and ensure that it actually is required); generally, do not ask people to believe or trust blindly, and do not encourage or normalize dangerous practices.
And as with any other pursuit, give it a try, do not give up, do not view it as "all or nothing": learning a little, paying some attention to security, and avoiding some of the potential losses that way is already better than being successfully attacked all the time.
Information security includes a few areas, but personal security usually revolves around privacy and confidentiality. Common threat actors targeting individuals are scammers, oppressive governments, and thrill seekers. All of those tend to be underestimated: scammers' victims think that they cannot be scammed, and are surprised afterwards; thrill seekers are often neglected because "why would anybody want to do that?"; governments are often ignored because of one's political views (loyalty to the regime; belief that it will not turn authoritarian, or is not authoritarian even after abandoning presidential term limits, introducing numerous censorship laws, and persecuting dissent; belief that it will not reach you) or out of learned helplessness. "I have nothing to hide" is another common sentiment, often extended to the private information of one's friends and family, which is useful to threat actors. It usually implies certainty that the government is on your side and will stay that way, in addition to one's immunity to the other risks. And then there are the likes of "the world is just, I am good, so nothing bad can happen to me", along with a variety of denial strategies, excuses, and religious beliefs.
Entities collecting information, even if they do not use it against you intentionally and immediately, may also be viewed as threats, since they tend to leak it via data breaches, or to abuse it themselves later. Those include commercial companies, government organizations, and individuals.
People may also engage in a crime of opportunity if the conditions for that are created: e.g., someone picking up or buying a discarded unencrypted storage device may access (recover) the private data stored on it. Same with information made available online: apparently even IT professionals manage to accidentally allow unauthenticated access to databases quite regularly, making it a common source of data breaches.
The principle of least privilege is generally useful: share the minimum information required to receive a service, or give minimal, controlled access to your system. E.g., buying most items, using most public transport, or visiting most public places should not require identifying yourself: doing so imposes an unnecessary risk. Likewise with running custom software to access online services, especially if it is closed-source (and possibly proprietary), so you cannot check what it is doing with your system (and the license may even forbid you to). Communicating over the Internet does not require providing your full name or phone number, or identifying yourself at all. Identifying yourself by sending pictures of documents is one of the sillier and more dangerous practices. Software should not run with superuser (root) privileges, and generally the usual security mechanisms must not be bypassed, unless there is a good reason to.
If someone asks you to take unnecessary risks like that, that itself is a cause for suspicion, and a reason to look for other options. Often this involves accepting inconveniences (such as visiting places and standing in queues instead of using proprietary software, dealing with paper documents, possibly with cash, missing some online conversations) and resisting peer pressure (e.g., "just set a sensible password like 1234", "install our software with curl | sh and run its custom updater to stay up to date", "let's run everything as root to avoid dealing with permissions").
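As a minimal sketch of the safer alternative to curl | sh (the file names here are made up, and the checksum step simulates one that a publisher would provide out of band):

```shell
# Simulate a downloaded installer; in practice you would fetch it with
# curl -fsSL -o install.sh <URL>, instead of piping curl straight into sh.
printf 'echo installed\n' > install.sh

# A publisher should provide a checksum (or signature) out of band;
# here we generate one locally just to demonstrate the verification step.
sha256sum install.sh > install.sh.sha256

# Verify the file against the published checksum before doing anything else.
sha256sum -c install.sh.sha256

# Read the script (e.g., with less install.sh), and only then execute it.
sh install.sh
```

The point is that a file on disk can be inspected, verified, and re-checked later, while a stream piped into a shell cannot.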
If private information is not requested by a service, or superuser privileges are not requested by software, it is safest not to volunteer them: e.g., use screen names both for online services and as a system user name (the system user name is occasionally used as the default name for information sent online; the best way to ensure that your real name is not leaked accidentally is to never enter it), and use dedicated system users or sandboxing facilities to run programs.
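A minimal sketch of the idea (the user and program names are hypothetical; dedicated system users, or sandboxing tools such as firejail or bubblewrap, provide much stronger isolation than this):

```shell
# Run a program with a scrubbed environment and a throwaway HOME,
# so it does not see your real environment variables or dotfiles.
mkdir -p /tmp/scratch-home
env -i HOME=/tmp/scratch-home PATH=/usr/bin:/bin sh -c 'echo "HOME is $HOME"'

# Stronger: run it as a dedicated unprivileged user
# (hypothetical user "untrusted-app"; requires sudo rights):
#   sudo -u untrusted-app /usr/bin/some-program
```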
Cryptography provides useful tools, with encryption perhaps the most notable one: it is useful for personal data storage (including encrypted backups), for communication (over email or via instant messengers, such as XMPP), and for channel security (for network connections). Another common use of cryptography is data integrity checks.
Following the general advice above, one should look for trustworthy (transparent, verifiable, openly developed) tools, ideally using free and open-source software exclusively, retrieving it from trusted sources (such as an operating system's repositories, where the packages are signed), preferably checking the code, but at least preferring tools that are used and inspected by many.
I personally use mostly LUKS for disk encryption and OpenPGP for file and mail encryption and signing, on a Debian system; and TLS, SSH, IPsec, and WireGuard for channel security. Those are widely available, well-known tools.
The usage of LUKS with cryptsetup(1) is described in the personal data storage notes linked above, while that of OpenPGP is described in GnuPG's user guides; it is supported out of the box in mail clients such as mu4e (an Emacs client), mutt (a standalone TUI client), and Thunderbird (a standalone GUI client), and GnuPG's gpg(1) command-line tool is fairly easy to use. For email, one may want to ensure that messages are encrypted not just for recipients, but also for the sender, so that the sender can read them later: mutt does it by default (the pgp_self_encrypt option), while for mu4e one should enable it via mml-secure-openpgp-encrypt-to-self.
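As a hedged sketch of basic gpg(1) usage (the email address and file names are made up; the whole thing runs in a throwaway keyring, so it does not touch a real one):

```shell
# Use a temporary GnuPG home so this demo does not affect your real keyring.
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"

# Generate a test key non-interactively (hypothetical address).
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Test User <test@example.org>" default default never

# Encrypt a message to that key; for email, adding --recipient for your
# own key as well is what lets you read your sent messages later.
echo 'hello' > msg.txt
gpg --batch --yes --recipient test@example.org --encrypt msg.txt

# Decrypt it back.
gpg --batch --quiet --decrypt msg.txt.gpg
```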
There are endless alternatives, which tend to incorporate the newest and shiniest algorithms (dangerous by itself: better to stick to heavily analyzed ones), to be written in this month's most trendy language (possibly to be abandoned soon), to be clean of the backwards- or standards-compatibility cruft accumulated by older tools, and to be supposedly easier to use, providing fun colors and supportive emojis. Some also like to write their own software, but there are many gotchas and cryptographic attacks that basic algorithm descriptions do not mention, which may easily compromise the system. Both scammers and governments like to advertise malware as security software, occasionally disguising attacks as security measures. More legitimate commercial companies, meanwhile, tend to sell virtually useless security products: not necessarily malware, but more of a placebo. Security theater is a shady practice along those lines.
There are minor tactics and useful habits, some of which can be described as simply common sense: use strong, randomly generated passwords or passphrases (e.g., generated with xkcdpass), do not reuse those across services, and maybe do not reuse logins and other identifying information, either. That may include things like the IP address, web browser fingerprints, and so on.
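A toy illustration of passphrase generation in the spirit of xkcdpass (the inline word list is far too small for real use; a real generator should draw from a large dictionary, such as /usr/share/dict/words or EFF's word lists):

```shell
# Tiny word list, for illustration only: 10 words give negligible entropy.
words='correct horse battery staple orbit lantern meadow quartz violet summit'

# Pick 4 random words and join them with dashes.
passphrase=$(printf '%s\n' $words | shuf --random-source=/dev/urandom -n 4 | paste -sd '-')
echo "$passphrase"
```

With a real dictionary of a few thousand words, four to six random words already give a passphrase that is both strong and memorable.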
Ensuring secure practices can be interesting and fun, and one may be enthusiastic about it, which helps to follow them. Then it is tempting to share that with others and improve their security practices, which is what I am trying to do by writing this. But keep in mind that people may simply not care about it, just as many do not care enough about their health, the environment (ecology, as well as politics), self-improvement, or a variety of other topics that yet others do care about. Even among those who do care about information security, threat models and views on ways to achieve it may differ considerably, as with the other mentioned topics. And it can be difficult to idly observe people you care about doing what you think is bad for them. I think a fine balance between being unhelpful and being annoying is to let people know that you are willing to help, to answer and explain things when asked, but not to force any of it onto others. And maybe to work on useful tools, infrastructure, and documentation, in order to satisfy the impulses to share and help, as well as to learn more in the process.
The same principles apply to information security in organizations, when setting up a company's servers or developing enterprise software, just as with software and hardware generally. There may be more bureaucratic approaches (with occasional checklists for compliance), the scales are different, and NIST's frameworks are more useful there, but it is basically the same thing.