Compare commits


31 commits

| SHA1 | Message | Date |
| --- | --- | --- |
| 326b65f507 | Elaborate on the iterative process for refactor scripts | 2026-04-20 00:26:12 +03:00 |
| d22b10a8ab | Teach Claude about the Three Virtues | 2026-04-20 00:26:11 +03:00 |
| d7e16e108d | Permit Claude to restructure automatically-committed history | 2026-04-20 00:26:11 +03:00 |
| d9d1ef0878 | Integrate git into Claude's development process | 2026-04-20 00:24:14 +03:00 |
| 6029206d1b | Add guidelines for naming and magic numbers in tests | 2026-04-20 00:24:14 +03:00 |
| 575976e578 | Invite creative writing in source code | 2026-04-20 00:24:14 +03:00 |
| 2be49fd182 | Describe the preferred setup of green-field projects | 2026-04-20 00:24:13 +03:00 |
| ff76aaf525 | Clarify the role of integration testing for code that requires I/O | 2026-04-20 00:24:13 +03:00 |
| d3c941a8c1 | Elaborate on outlier cases in the testing instructions | 2026-04-20 00:24:12 +03:00 |
| 48c5b4e00e | Add Gemma 4 to the model library | 2026-04-20 00:24:12 +03:00 |
| 69d53d4d8a | Increase the cache TTL for loaded models | 2026-04-20 00:24:11 +03:00 |
| 4dd9795b0b | Mirror resource limits in podman as well | 2026-04-20 00:24:11 +03:00 |
| 5910825ca9 | Restrict service resource usage | 2026-04-20 00:24:10 +03:00 |
| 8c53e540d8 | Ignore local Claude Code files globally | 2026-04-20 00:24:10 +03:00 |
| 5e559aa2a9 | Use a widely-available terminal config in SSH remotes | 2026-04-20 00:24:09 +03:00 |
| e72fe95497 | Create periodic healthcheck units for the transient store | 2026-04-20 00:24:09 +03:00 |
| bebbefcda7 | Support health checks for the services | 2026-04-20 00:24:09 +03:00 |
| 7be6bccbc9 | Check for image updates on startup | 2026-04-20 00:24:08 +03:00 |
| 36a16cc1dd | Only expose access ports on the localhost network | 2026-04-20 00:24:08 +03:00 |
| 2a383a6c3c | Restrict service container privileges | 2026-04-20 00:24:07 +03:00 |
| ce0be360e5 | Install SimpleX Chat from the GitHub repository | 2026-04-20 00:24:07 +03:00 |
| 4bd68f4614 | Short-circuit installation commands on failure | 2026-04-20 00:24:06 +03:00 |
| 87c66ec157 | Use hardened defaults for SSH connections | 2026-04-20 00:24:06 +03:00 |
| 340b20f39e | Sort keybindings | 2026-04-20 00:24:05 +03:00 |
| 31106b267d | Back up the password databases | 2026-04-20 00:24:05 +03:00 |
| 34bf948ef1 | Update the script to work in strict mode | 2026-04-20 00:24:05 +03:00 |
| 59d1f1d22d | DRY the sync backup script | 2026-04-20 00:24:04 +03:00 |
| bec062d420 | Install a CLI tool for Hetzner Cloud | 2026-04-20 00:24:04 +03:00 |
| d5f4376125 | Provide installation suggestions for missing utilities | 2026-04-20 00:24:03 +03:00 |
| c9c788efe3 | Correctly update executable links | 2026-04-20 00:24:03 +03:00 |
| fc840dd54e | Set core rules for Claude Code behavior | 2026-04-20 00:24:02 +03:00 |
24 changed files with 225 additions and 31 deletions

@@ -1,5 +1,47 @@
# Project Instructions
## Core Rules
- When asked to do ONE thing, do exactly that. Do not proactively migrate dependencies, refactor adjacent code, or expand scope. You may suggest further edits, but wait for confirmation before any scope expansion.
- Prefer the simplest, most localized solution. Changes should target the most-relevant section of code — for example, catch errors in the scope that best handles them rather than injecting data up or down the stack. Take time to think about the best approach rather than quickly jumping to an implementation.
## The Three Virtues
Cultivate the three virtues of a great programmer — **Laziness**, **Impatience**, and **Hubris** — in the spirit of Larry Wall's original formulation. The first virtue deserves particular emphasis.
**Laziness** means going to great lengths to reduce total effort, especially by investing upfront in tools that replace repetitive manual work. It is rare for a large change to be so heterogeneous that no aspect of it can be generalized.
### Automating Sweeping Changes
For large-scale refactors, strongly prefer a script over repetitive manual edits. Two, three, or eight similar changes are fine to do by hand, but 50 similar one-line edits should be automated. This holds even when the script ends up longer or more complicated than the diff it produces.
The preferred workflow:
1. Craft a utility that applies the transformation. Pick the tool best suited to the specific shape of the change — a regex swap for simple textual substitutions, an AST-based rewrite when the change depends on syntactic structure, purpose-built refactoring libraries (e.g. `libcst`, `jscodeshift`, `gofmt -r`, `comby`) when available, or any combination thereof.
2. Commit the script on its own.
3. Run it and commit the results as a separate commit.
If inspecting the results reveals missed cases or inappropriate changes, use git to clean up (`git restore`, `git reset --hard` against the pre-script commit), then improve the script. Commit improvements as follow-ups, or amend the script's commit for trivial bug fixes, and re-run.
Continue iterating until:
- **(a) No incorrect changes are included in the diff.** This is a hard requirement — a script that produces even one wrong edit is not done.
- **(b) Either no missing cases remain, or the remaining cases are few and share too little in common for their inclusion in the script to be easier to validate than just editing them by hand.** When a handful of outlier cases survive the automated pass, handle them in a dedicated commit, separate from both the script-creation commit and the script-execution commit.
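To make steps 1-3 concrete, here is a minimal sketch of such a script (the `old_helper` -> `new_helper` rename, the demo file, and all paths are hypothetical; assumes GNU grep, sed, and xargs):

```shell
#! /usr/bin/bash
# Hypothetical sweeping rename: old_helper -> new_helper.
# Commit this script on its own, then run it and commit the diff separately.
set -euo pipefail
IFS=$'\n\t'

# Demo input so the transformation can be observed end to end.
workdir="$(mktemp --directory)"
printf 'x = old_helper(1)\ny = old_helper(2)\n' > "${workdir}/a.py"

# A regex swap suffices for a purely textual change; an AST-based rewrite
# would be warranted if the change depended on syntactic structure.
grep --recursive --files-with-matches --null 'old_helper(' "${workdir}" \
  | xargs --null --no-run-if-empty sed --in-place 's/old_helper(/new_helper(/g'

cat "${workdir}/a.py"
```

If inspection turns up a bad edit, restore the tree, improve the script, and re-run; the script commit, the execution commit, and any hand-edited outliers stay separate.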
### Why This Matters for Review
A refactor script plus a spot-check of its output is far easier to review than a diff of 50 manual edits. The reviewer can be convinced of the logical correctness of the script and sample its results, rather than verifying every edit individually and separately confirming that no instances were accidentally skipped.
### Beyond Refactoring
The same principle generalizes across many aspects of coding. Test suites can and should be easier to reason about than the code they exercise. Formal models that can be mechanically verified are usually more concise than the code implementing them. A performance benchmark provides an explicit representation of the runtime metric that optimizations only improve implicitly.
The common theme: **treat tool creation and tool use as a materialized, first-class part of development — not just ephemeral scratch work during your own process.** Show your work and publish it into the history.
## Tool Usage Preferences
For simple factual lookups (package versions, release dates), use targeted, purpose-built commands and local CLI tools first before attempting web searches — e.g. `pip index versions <pkg>` for Python, `npm view <pkg> versions` for Node. Prefer fast local approaches over web research.
## Container Environment (Podman)
This environment runs inside a container with access to a Podman socket shared from the host. There is no `docker` or `podman` CLI available, but you can interact with containers via the Docker-compatible API.
@@ -31,6 +73,10 @@ Style preferences (when not conflicting with existing patterns):
- Avoid mutation of inputs
- Pure functions where practical
**Fun is welcome in moderation.** Clarity and readability come first, but the occasional reference, joke, or creative naming makes code more enjoyable to read and write. The key constraint: it must be apropos to the actual code — no random remarks. A comment that winks at a known falsehood the code knowingly embraces, or a function name that doubles as a cultural reference while accurately describing its behavior, are both fair game. Keep it sparse; if every function has a quip, none of them land.
**Naming in test code** has different rules than implementation code. Implementation names must always be meaningful and reflective of purpose. Test code, however, may use metasyntactic variables when a value is arbitrary and meaningfulness would be misleading. Preferred metasyntactic names: `foo`, `bar`, `baz`, `frob`, `xyzzy`, and conjugations of `frobnicate`. For arbitrary magic numbers, prefer values with clean ternary representations (e.g., 72 or 243 over 128 or 255 when a test needs a fixed byte value).
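A hypothetical illustration of these rules in a shell test (`frobnicate` and its behavior are invented for the example):

```shell
#! /usr/bin/bash
set -euo pipefail
IFS=$'\n\t'

# Unit under test (hypothetical): joins two values with a colon.
frobnicate() {
  printf '%s:%s' "${1}" "${2}"
}

# The inputs are arbitrary, so metasyntactic names say so honestly,
# and the magic number is ternary-clean (243 = 3^5).
foo='xyzzy'
bar=243
test "$(frobnicate "${foo}" "${bar}")" = 'xyzzy:243' && echo 'ok'
```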
**Style changes should be separate from implementation.** If you notice style inconsistencies or want to improve patterns, do so in dedicated refactor commits or branches rather than mixing with feature work.
## Test Coverage
@@ -46,6 +92,18 @@ Why this matters:
When adding or modifying code, verify that tests cover the new logic. If coverage drops, add tests before merging.
### Coverage Exclusions and Test Quality
**Pure I/O code is excluded from coverage requirements.** Code whose sole purpose is performing I/O (reading files, making network calls, rendering output) cannot be effectively tested without manual interaction. However, this has a direct design implication: keep the I/O layer as thin and trivial as possible. All business logic, validation, transformation, and decision-making must live in testable modules that the I/O layer merely calls into. A fat I/O layer is a design smell, not an excuse for missing tests.
**The value of integration testing for I/O is context-dependent** — it depends on whether I/O is incidental to the component or central to its purpose.
When I/O is incidental (e.g., an application that loads configuration from a file), there is no value in testing the file-reading call itself — trust the language's I/O primitives. Instead, feed raw data to a pure function that handles parsing and validation. In some cases even parsing tests may be unnecessary, such as a JSON config file loaded via a standard-library routine that directly constructs application-defined structs. Structure such code to confine I/O in a short routine that can be excluded from coverage.
When I/O *is* the core business logic (e.g., a database engine or FUSE filesystem), it must be thoroughly integration-tested against a functioning backend. The I/O layer cannot be excluded here because it is the component's reason for existing. Provision appropriate test infrastructure: a tmpfs filesystem for storage-centric tests, an Alpine testcontainer for cases that need to exercise interactions between different user permissions, or an emulated service with reliable compatibility to the real target (e.g., MinIO via testcontainers for S3-dependent code). This is preferable to either mocking away the I/O (which hides real failure modes) or leaving the logic untested.
**Tests must exercise actual code paths, not reproduce them.** In rare cases, code is so trivial that the only apparent way to test it is to restate it in the test. Such tests verify nothing — they pass by construction and remain passing even when the code changes, which demonstrates that they provide no actual validation. Do not write these. Instead, explicitly exclude the code from coverage. Note that this situation is rare and usually signals a design gap (logic that should be extracted or combined with something more substantive) rather than inherent untestability.
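A shell sketch of this shape (the `port=` config format and the function names are hypothetical): the pure parser is tested directly on raw data, while the one-line I/O wrapper is what gets excluded from coverage:

```shell
#! /usr/bin/bash
set -euo pipefail
IFS=$'\n\t'

# Pure logic: parses and validates raw text from its argument.
# Fully testable without touching the filesystem.
parse_port() {
  printf '%s\n' "${1}" | sed --quiet 's/^port=\([0-9][0-9]*\)$/\1/p'
}

# Thin I/O layer: reads the file and delegates. Excluded from coverage.
load_port() {
  parse_port "$(cat "${1}")"
}

# A test feeds raw data straight to the pure function:
test "$(parse_port 'port=8080')" = '8080' && echo 'ok'
```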
## CLI Style
**Prefer long option names over short ones** in command-line applications and examples.
@@ -59,3 +117,45 @@ command -v -o file.txt
```
Long options are self-documenting and make scripts and examples easier to understand without consulting help text. Short options are acceptable for interactive use but should not appear in committed code, documentation, or examples.
## Git Workflow
Assume you are working in a git repository. Partition changes into small, self-contained commits and commit each before proceeding to the next change. When enacting a plan, a single action item will often span several such commits — that is expected and preferred over bundling unrelated changes together.
Leverage the git history during development as well. Git enables efficient and reliable rollbacks of recent changes or research dead ends, and clean reverts of specific diffs from earlier in the history. Prefer these over manual cleanup.
### Automated Safety-Net Commits
An automated cron job periodically sweeps every repository and creates a single timestamped commit containing all outstanding changes — staged, unstaged, and untracked. Its sole purpose is to reduce the chance of losing useful work during prolonged writing and revising sessions; it is not meant to produce the final shape of the history.
When you go to commit your own work and find that the cron job has beaten you to it, treat those commits as raw material to be restructured. Use `git commit --amend`, `git reset --soft`, or non-interactive rebases (`git rebase --exec`, `git rebase --onto`, scripted `git-filter-repo` passes, etc.) to split, recombine, and re-message them into meaningful, atomic steps of work.
Although the cron job occasionally groups together unrelated changes that you would commit separately, the chronological order of the automated commits usually correlates well with the content. Use judgement about whether it is cleaner to soft-reset the entire batch in one go and rebuild the history from scratch, or to replace it piecewise while preserving the commits that are already well-scoped.
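For instance, soft-resetting a batch of two sweep commits and rebuilding it as one atomic commit (the repository, file, and messages below are invented for illustration):

```shell
#! /usr/bin/bash
set -euo pipefail
IFS=$'\n\t'

repo="$(mktemp --directory)"
cd "${repo}"
git init --quiet
git config user.name 'demo'
git config user.email 'demo@example.com'

printf 'base\n' > notes.txt
git add notes.txt
git commit --quiet --message 'Initial commit'

# Two automated safety-net sweeps land while work is in progress.
printf 'draft\n' >> notes.txt
git commit --quiet --all --message 'auto: 2026-04-19T23:00'
printf 'final\n' >> notes.txt
git commit --quiet --all --message 'auto: 2026-04-19T23:10'

# Soft-reset past both sweeps (the work tree is untouched), then
# re-commit the combined change under a meaningful message.
git reset --quiet --soft HEAD~2
git commit --quiet --message 'Describe the change properly'

git log --format='%s'
```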
## Green-Field Project Setup
When setting up a new project, code-quality and developer-experience tooling must be included from the start and integrated into the development workflow. The principles below use Python as a concrete example, but apply generally to any language ecosystem.
### Python Tooling
Use **uv** to manage dependencies and create the project virtual environment. All work must be performed inside the venv. Additionally, install and configure the **pre-commit** hook manager with a baseline DevEx toolset:
- **ruff** — linting and formatting
- **mypy** — static type checking
- **tach** — structural/dependency boundary checks
Configure all tools for their strictest check levels by default. Include a `py.typed` marker file in every package to signal PEP 561 compliance.
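A baseline `.pre-commit-config.yaml` along these lines (a sketch: the `rev` values are placeholders to pin, and the tach hook is omitted since its hook id should be taken from its own documentation):

```yaml
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.0.0  # placeholder; pin to the current release
    hooks:
      - id: ruff
      - id: ruff-format
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v0.0.0  # placeholder; pin to the current release
    hooks:
      - id: mypy
```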
### Line Length
Do not manually break lines to conform to a line-length limit. Automated code formatters (ruff, gofmt, etc.) handle this for source code. Write unbroken lines in text and Markdown files (e.g., README.md) as well. This also applies to one-off files outside of a project context.
### Licensing (REUSE)
In all projects, install a **pre-commit hook for the REUSE tool** to lint licensing information and ensure every file has correct SPDX headers.
Default license assignments:
- **GPL-3.0-or-later** — source code files in coding projects
- **CC-BY-SA-4.0** — documentation files (README, user guides, etc.); also the default project license for non-coding projects
- **CC0-1.0** — project configuration files (e.g., `pyproject.toml`, `tach.toml`) and small utility scripts or Makefiles that are not core to the implemented logic
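For reference, the header the REUSE tool checks for at the top of, say, a GPL-licensed source file (year and holder are placeholders):

```
# SPDX-FileCopyrightText: 2026 Example Author <author@example.com>
#
# SPDX-License-Identifier: GPL-3.0-or-later
```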

@@ -5,11 +5,16 @@ Description=A local LLM server
 # keep-sorted start
 AutoUpdate=registry
 ContainerName=ollama
-Environment=OLLAMA_KEEP_ALIVE=10m
+DropCapability=ALL
+Environment=OLLAMA_KEEP_ALIVE=30m
+HealthCmd=ollama list
+# HealthInterval=30s
+# HealthStartPeriod=15s
 Image=docker.io/ollama/ollama:latest
 Network=ollama.network
-PodmanArgs=--transient-store
-PublishPort=11434:11434
+NoNewPrivileges=true
+PodmanArgs=--pull=newer --transient-store
+PublishPort=127.0.0.1:11434:11434
 ReadOnly=true
 Volume=%h/.local/share/ollama:/root/.ollama:ro,z
 # keep-sorted end

@@ -5,10 +5,12 @@ Description=A local PlantUML server
 # keep-sorted start
 AutoUpdate=registry
 ContainerName=plantuml
+DropCapability=ALL
 Image=docker.io/plantuml/plantuml-server:jetty
 Network=private
-PodmanArgs=--transient-store
-PublishPort=8080:8080
+NoNewPrivileges=true
+PodmanArgs=--cpus 1 --memory 1g --pull=newer --transient-store
+PublishPort=127.0.0.1:8080:8080
 ReadOnly=true
 # keep-sorted end
@@ -16,4 +18,8 @@ ReadOnly=true
 WantedBy=default.target
 [Service]
+# keep-sorted start
+CPUQuota=100%
+MemoryMax=1G
 Restart=always
+# keep-sorted end

@@ -7,12 +7,15 @@ AutoUpdate=registry
 ContainerName=transmission
 Environment=PGID=1000
 Environment=PUID=1000
+HealthCmd=curl --fail --silent http://localhost:9091/
+# HealthInterval=30s
+# HealthStartPeriod=30s
 Image=lscr.io/linuxserver/transmission:latest
 Network=private
-PodmanArgs=--transient-store
+PodmanArgs=--cpus 2 --memory 512m --pull=newer --transient-store
+PublishPort=127.0.0.1:9091:9091
 PublishPort=51413:51413
 PublishPort=51413:51413/udp
-PublishPort=9091:9091
 ReadOnly=true
 UserNS=keep-id
 Volume=%h/.config/transmission:/config:Z
@@ -25,10 +28,12 @@ WantedBy=default.target
 [Service]
 # keep-sorted start
+CPUQuota=200%
 ExecStartPre=mkdir --parents %h/.config/transmission
 ExecStartPre=mkdir --parents %h/Downloads/transmission
 ExecStartPre=mkdir --parents %h/Downloads/transmission/complete
 ExecStartPre=mkdir --parents %h/Downloads/transmission/incomplete
 ExecStartPre=mkdir --parents %h/Downloads/transmission/watch
+MemoryMax=512M
 Restart=always
 # keep-sorted end

@@ -30,10 +30,14 @@
 (use-package emacs
 :ensure nil
-:bind (("C-z" . nil)
-("C-z i" . find-init-file)
-("C-z f" . ffap)
-("C-z u" . insert-uuid4-at-point))
+:bind (
+("C-z" . nil)
+;; keep-sorted start
+("C-z f" . ffap)
+("C-z i" . find-init-file)
+("C-z u" . insert-uuid4-at-point)
+;; keep-sorted end
+)
 :hook (
 ;; keep-sorted start
 (after-save . executable-make-buffer-file-executable-if-script-p)

@@ -11,6 +11,14 @@
 :context-window 128
 :cutoff-date "2024-08"
 )
+(
+gemma4:latest
+:description "A model from Google built on Gemini technology"
+:capabilities (media tool-use cache)
+:mime-types ("image/bmp" "image/gif" "image/jpeg" "image/png" "image/tiff" "image/webp")
+:context-window 128
+:cutoff-date "2025-01"
+)
 (
 hf.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2-GGUF:latest
 :description "Uncensored model based on Llama-3.1-8b-Instruct"

@@ -15,6 +15,7 @@ DEB_PKGS=(
 borgbackup
 build-essential
 catatonit
+command-not-found
 curl
 default-jdk
 direnv
@@ -34,6 +35,7 @@ DEB_PKGS=(
 graphviz
 grim
 guile-3.0
+hcloud-cli
 htop
 imagemagick
 inkscape

@@ -5,6 +5,8 @@ IFS=$'\n\t'
 # keep-sorted start
 systemctl --user enable --now backup.timer
+systemctl --user enable --now podman-healthcheck@ollama.timer
+systemctl --user enable --now podman-healthcheck@transmission.timer
 systemctl --user enable --now sync-backup.timer
 systemctl --user enable --now sync-git-repos.timer
 # keep-sorted end

@@ -0,0 +1,6 @@
+[Unit]
+Description=Podman health check for %i
+
+[Service]
+Type=oneshot
+ExecStart=podman --transient-store healthcheck run %i

@@ -0,0 +1,11 @@
+[Unit]
+Description=Podman health check timer for %i
+BindsTo=%i.service
+After=%i.service
+
+[Timer]
+OnActiveSec=30s
+OnUnitActiveSec=30s
+
+[Install]
+WantedBy=%i.service

@@ -20,3 +20,5 @@
 # keep-sorted end
 [include]
 path = .hostgitconfig
+[core]
+excludesfile = ~/.gitignore_global

.gitignore_global (new file)

@@ -0,0 +1,3 @@
+/conversation-id.txt
+/conversation-id-*.txt
+/.claude/settings.local.json

@@ -5,8 +5,11 @@ IFS=$'\n\t'
 export BORG_REPO="/media/backup/"
 
-if [ "$1" = "service" ]; then
-rclone sync "${BORG_REPO}" gdrive-backup:hot-repo/
+if [ "${1:-}" = "service" ]; then
+extra_args=()
 else
-rclone sync --progress "${BORG_REPO}" gdrive-backup:hot-repo/
+extra_args=(--progress)
 fi
+
+rclone sync "${extra_args[@]}" "${BORG_REPO}" gdrive-backup:hot-repo/
+rclone copy "${extra_args[@]}" ~/.keys/ --include '*.kdbx' gdrive-backup:keys/

@@ -11,8 +11,8 @@ dolt_resource() {
 }
 
 install_dolt() {
-tar xz --directory="$(systemd-path user-binaries)" --strip-components=2 dolt-linux-amd64/bin/dolt
+tar xz --directory="$(systemd-path user-binaries)" --strip-components=2 dolt-linux-amd64/bin/dolt && \
 chmod 550 "$(systemd-path user-binaries)"/dolt
 }
 
 github_update "${package}" "${repo}" dolt_resource install_dolt

@@ -20,7 +20,7 @@ install_fstar() {
 rm --force --recursive "${INSTALL_DIR}" && \
 mv "${tempdir}"/fstar "$(dirname "${INSTALL_DIR}")" && \
 rm --force --recursive "${tempdir}" && \
-ln --symbolic "${INSTALL_DIR}"/bin/fstar.exe "$(systemd-path user-binaries)"/fstar.exe
+ln --force --symbolic "${INSTALL_DIR}"/bin/fstar.exe "$(systemd-path user-binaries)"/fstar.exe
 }
 
 github_update "${package}" "${repo}" fstar_resource install_fstar

@@ -11,8 +11,8 @@ kingfisher_resource() {
 }
 
 install_kingfisher() {
-tar xz --directory="$(systemd-path user-binaries)" kingfisher
+tar xz --directory="$(systemd-path user-binaries)" kingfisher && \
 chmod 550 "$(systemd-path user-binaries)"/kingfisher
 }
 
 github_update "${package}" "${repo}" kingfisher_resource install_kingfisher

@@ -11,10 +11,10 @@ minikube_resource() {
 }
 
 install_minikube() {
-tempfile="$(mktemp)"
-cat - > "${tempfile}"
-chmod 550 "${tempfile}"
+tempfile="$(mktemp)" && \
+cat - > "${tempfile}" && \
+chmod 550 "${tempfile}" && \
 mv "${tempfile}" "$(systemd-path user-binaries)"/minikube
 }
 
 github_update "${package}" "${repo}" minikube_resource install_minikube

@@ -11,10 +11,10 @@ rust_analyzer_resource() {
 }
 
 install_rust_analyzer() {
-tempfile="$(mktemp)"
-gunzip --to-stdout - > "${tempfile}"
-chmod 550 "${tempfile}"
+tempfile="$(mktemp)" && \
+gunzip --to-stdout - > "${tempfile}" && \
+chmod 550 "${tempfile}" && \
 mv "${tempfile}" "$(systemd-path user-binaries)"/rust-analyzer
 }
 
 github_update "${package}" "${repo}" rust_analyzer_resource install_rust_analyzer

@@ -0,0 +1,20 @@
+#! /usr/bin/bash
+
+set -euo pipefail
+IFS=$'\n\t'
+
+package=simplex-chat
+repo=simplex-chat/simplex-chat
+
+sc_resource() {
+echo "simplex-chat-ubuntu-24_04-x86_64"
+}
+
+install_sc() {
+tempfile="$(mktemp)" && \
+cat - > "${tempfile}" && \
+chmod 550 "${tempfile}" && \
+mv "${tempfile}" "$(systemd-path user-binaries)"/simplex-chat
+}
+
+github_update "${package}" "${repo}" sc_resource install_sc

@@ -19,7 +19,7 @@ install_tlapm() {
 rm --force --recursive "${INSTALL_DIR}" && \
 mv "${tempdir}"/tlapm "$(dirname "${INSTALL_DIR}")" && \
 rm --force --recursive "${tempdir}" && \
-ln --symbolic "${INSTALL_DIR}"/bin/tlapm "$(systemd-path user-binaries)"/tlapm
+ln --force --symbolic "${INSTALL_DIR}"/bin/tlapm "$(systemd-path user-binaries)"/tlapm
 }
 
 github_update "${package}" "${repo}" tlapm_resource install_tlapm 1.6.0-pre

@@ -11,10 +11,10 @@ uv_resource() {
 }
 
 install_uv() {
-tempdir="$(mktemp --directory)"
+tempdir="$(mktemp --directory)" && \
 tar xz --directory="${tempdir}" --strip-components=1 && \
 chmod 550 "${tempdir}"/uv "${tempdir}"/uvx && \
 mv --force "${tempdir}"/uv "${tempdir}"/uvx "$(systemd-path user-binaries)"
 }
 
 github_update "${package}" "${repo}" uv_resource install_uv

.ssh/config (new file)

@@ -0,0 +1 @@
+Include ~/.ssh/config.d/*.conf

@@ -0,0 +1,14 @@
+# SSH client algorithm hardening.
+#
+# Require PQ-hybrid KEX, AEAD ciphers, Ed25519 keys.
+# Applied to all outgoing SSH connections from this machine.
+#
+# Requires OpenSSH 9.9+ for mlkem768x25519-sha256.
+
+Host *
+KexAlgorithms mlkem768x25519-sha256,sntrup761x25519-sha512@openssh.com
+Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com
+MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com
+HostKeyAlgorithms ssh-ed25519,ssh-ed25519-cert-v01@openssh.com
+PubkeyAcceptedAlgorithms ssh-ed25519,ssh-ed25519-cert-v01@openssh.com
+RekeyLimit 1G 1h

@@ -0,0 +1,2 @@
+Host *
+SetEnv TERM=xterm-256color