Since we've had a hardware failure and now have a new server, here is a walkthrough of the recovery and reinstall. First, the disks visible from the rescue system:
- sda – 223.6G SSD (likely your system disk)
  - sda1 – 4G (possibly swap or rescue)
  - sda2 – 512M (likely /boot or EFI)
  - sda3 – 219.1G (probably the main root partition)
- sdb – 1.8T HDD (likely for data)
  - sdb1 – full 1.8T partition
Step 1: Identify the likely root/data partitions
From your output:
- sda3 – 219.1G – likely the root filesystem
- sdb1 – 1.8T – likely data or a large mount like /home, /var, or similar
Step 2: Mount the root partition
mkdir /mnt/sda3
mount /dev/sda3 /mnt/sda3
ls /mnt/sda3
Step 3: Mount boot (if needed)
mkdir /mnt/sda3/boot
mount /dev/sda2 /mnt/sda3/boot
🔽 Step 4: Mount the data drive
mkdir /mnt/sdb1
mount /dev/sdb1 /mnt/sdb1
ls /mnt/sdb1
🔧 Optional: Check filesystem type if mount fails
Sometimes the mount fails if the FS type isn't auto-detected. You can use:
blkid
And try to mount with explicit type, e.g.:
mount -t ext4 /dev/sdb1 /mnt/sdb1
mount /dev/sda3 /mnt/sda3
mount: /mnt/sda3: unknown filesystem type 'linux_raid_member'.
       dmesg(1) may have more information after failed mount system call.
This means /dev/sda3 is part of a software RAID array, probably managed by mdadm, and is not directly mountable. The same might apply to /dev/sdb1.
✅ Step-by-Step Recovery from Software RAID
Step 1: Check for existing RAID arrays
First, let’s see if any RAID arrays are already detected:
cat /proc/mdstat
If it shows something like md0, md1, etc., the RAID was assembled automatically.
🛠️ Step 2: Assemble the RAID manually (if not already)
If nothing shows up in /proc/mdstat, try assembling:
mdadm --assemble --scan
Then check again:
cat /proc/mdstat
If arrays appear (e.g., /dev/md0), those are your actual mountable devices.
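If --scan finds nothing (for example, the superblocks are fine but the config file is missing), you can also name an array's members explicitly. A minimal sketch, where the md device and partition names are just examples for this box:
mdadm --assemble --run /dev/md2 /dev/sda3   # --run starts the array even if degraded
mdadm --assemble --scan --run               # same forcing behaviour, scan-wide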
🧾 Step 3: List block devices with RAID info
To see RAID members:
mdadm --detail --scan
mdadm --examine /dev/sda3
mdadm --examine /dev/sdb1
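Partitions that belong to the same array share an "Array UUID" in the --examine output, so a quick way to compare the two is just a grep over the commands above:
mdadm --examine /dev/sda3 | grep -i 'array uuid'
mdadm --examine /dev/sdb1 | grep -i 'array uuid'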
Step 4: Mount the RAID array
mkdir /mnt/md0
mount /dev/md0 /mnt/md0
ls /mnt/md0
🔧 Optional: Install mdadm (if needed)
If the command mdadm is not found, install it (depends on rescue environment):
apt update && apt install mdadm # Debian/Ubuntu-based
yum install mdadm # RHEL/CentOS-based
cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[1]
      4190208 blocks super 1.2 [2/1] [_U]

md1 : active raid1 sda2[1]
      523712 blocks super 1.2 [2/1] [_U]

md2 : active raid1 sda3[1]
      229579328 blocks super 1.2 [2/1] [_U]
      bitmap: 2/2 pages [8KB], 65536KB chunk

unused devices: <none>
✅ Only sda is present in the arrays; the [2/1] and [_U] markers mean each mirror is running on one of two members. The second disk (sdb) is not part of the RAID.
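To see the degraded state in detail, and later (once a replacement disk is partitioned) to re-add a mirror member, the usual mdadm calls look like this; the sdb3 name is hypothetical, and in our case we reinstall instead of rebuilding:
mdadm --detail /dev/md2          # shows one slot as 'removed' on a degraded mirror
mdadm /dev/md2 --add /dev/sdb3   # hypothetical: re-add a member, triggering a resync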
✅ What to do now
Let’s try mounting the RAID devices:
🔹 Mount root filesystem (likely md2)
mkdir /mnt/md2
mount /dev/md2 /mnt/md2
ls /mnt/md2
mkdir /mnt/md2/boot
mount /dev/md1 /mnt/md2/boot
Check for kernel files (vmlinuz, initrd.img, maybe an EFI folder if UEFI).
🔹 Mount swap (optional, md0)
/dev/md0 is likely swap; no need to mount it unless you're doing a full recovery. You can check whether it's swap like this:
file -s /dev/md0
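blkid gives the same answer, and if the rescue session needs it you can even enable the swap; both are standard commands:
blkid /dev/md0    # TYPE="swap" confirms it
swapon /dev/md0   # optional: actually use the swap in the rescue system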
Also check what's on /dev/sdb1
Since sdb1 is not in any RAID, it may contain standalone data or be a former RAID member:
blkid /dev/sdb1
file -s /dev/sdb1
mkdir /mnt/sdb1
mount /dev/sdb1 /mnt/sdb1
ls /mnt/sdb1
It mounts, and df -h confirms the data is there:
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       1.8T  500G  1.3T  28% /mnt/sdb1
✅ Steps to Copy SSD (RAID) to HDD
1. Create the destination folder
mkdir -p /mnt/sdb1/ssd_copy
2. Run the copy in screen
screen -S copy_backup
3. Run the actual copy
cd /mnt/md2
cp -a . /mnt/sdb1/ssd_copy/
Detach from the screen session with Ctrl + A, then D.
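To get back to the session later, standard screen usage applies:
screen -ls              # list running sessions
screen -r copy_backup   # reattach to the copy session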
🔧 Optional: Monitor progress
watch -n 5 'du -sh /mnt/sdb1/ssd_copy'
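As a rough sanity check once the copy finishes, you can compare totals and file counts; a sketch (counts won't match exactly if anything is still mounted under /mnt/md2, e.g. boot):
du -sh /mnt/md2 /mnt/sdb1/ssd_copy   # totals should be close
find /mnt/md2 -xdev | wc -l          # -xdev: don't descend into other mounts
find /mnt/sdb1/ssd_copy | wc -l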
✅ Basic rsync command to copy everything (an alternative to the cp above, with error handling)
rsync -aAXv --progress --inplace --ignore-errors /mnt/md2/ /mnt/sdb1/ssd_copy/
Explanation:
- -a → archive (preserves everything)
- -A → preserve ACLs
- -X → preserve extended attributes
- -v → verbose (see what’s going on)
- --progress → show file progress
- --inplace → write directly to destination file (can be safer on low space)
- --ignore-errors → keeps going even if there are read/write issues (and the copy can safely be re-run; see the note below)
🚨 If disk errors persist…
If dmesg shows I/O errors, let me know; we might want to remount with ro (read-only), or try ddrescue to salvage data from a failing disk. Want help checking for I/O errors or logs?
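A quick way to look for those errors yourself, plus the read-only remount mentioned above (the grep pattern is just a heuristic):
dmesg | grep -iE 'i/o error|ata|fail' | tail -n 50
mount -o remount,ro /mnt/md2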
🔧 Quick Checklist Before Install
- What OS do you want to install? (e.g. Debian, Ubuntu, CentOS)
- Do you want software RAID again? Or go without?
- Keep using both disks like before? (SSD for system, HDD for data?)
- Any data to restore from backups (like from /mnt/sdb1/ssd_copy)?
🛠️ General Install Flow (Assuming Debian/Ubuntu-style):
🔹 1. Boot Into Rescue
Perfect! You're in the Hetzner Rescue System, and it's ready for a fresh OS install using their tool installimage.
Looks like you have:
- /dev/sda – 480 GB SSD
- /dev/sdb – 480 GB SSD
- /dev/sdc – 2 TB HDD
⚠️ Disks are currently unpartitioned — so everything will be wiped when you install, as expected.
✅ To Install the OS:
installimage
This launches an interactive TUI that guides you through:
- Selecting the OS (Ubuntu, Debian, CentOS, etc.)
- Choosing the disk layout
- Configuring RAID (optional; typically RAID1 for system SSDs)
- User setup and SSH keys
💡 Recommended Setup (based on your hardware):
System disks:
- Use /dev/sda and /dev/sdb in RAID1
- OS, /boot, /, and maybe swap here
Data disk:
- Leave /dev/sdc unconfigured during install
- Later format and mount manually (e.g., /data, or restore the backup); a sketch of that step follows below
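For completeness, here is a minimal sketch of that later formatting step. It is destructive, so only for a disk with nothing worth keeping on it; in our case /dev/sdc still holds the backup, so we only mount it (see further down):
parted /dev/sdc mklabel gpt                              # new partition table (wipes the disk!)
parted -a optimal /dev/sdc mkpart primary ext4 0% 100%   # one big partition
mkfs.ext4 /dev/sdc1
mkdir -p /data
mount /dev/sdc1 /data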
🔧 Suggestion: Choose Ubuntu-2204-jammy-amd64-base unless you have a reason to go newer or minimal.
After that, you’ll go to the disk/RAID layout screen. Let me know when you’re there and I’ll help you set it up right (especially if you want RAID1 with /dev/sda + /dev/sdb and keep /dev/sdc for data).
Go ahead and select:
Ubuntu-2204-jammy-amd64-base
That’s the sweet spot: long-term support (LTS), solid performance, and wide compatibility.
Perfect: you're now editing the install.conf file used by Hetzner’s installimage. Right now it's set to create RAID over all 3 drives, which we do not want, because /dev/sdc is your 2 TB data drive and should stay untouched.
✅ What to change:
🔹 1. Exclude /dev/sdc from the RAID so it won't get wiped: comment out its DRIVE line.
# DRIVE3 /dev/sdc
🔹 2. Keep software RAID enabled:
SWRAID 1
SWRAIDLEVEL 1
HOSTNAME Vindazo
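For reference, the relevant install.conf block then looks roughly like this. The PART lines mirror the old layout (4G swap, 1G /boot, rest for /) and the IMAGE path is illustrative; both are assumptions, not copied from this session:
DRIVE1 /dev/sda
DRIVE2 /dev/sdb
# DRIVE3 /dev/sdc
SWRAID 1
SWRAIDLEVEL 1
HOSTNAME Vindazo
PART swap swap 4G
PART /boot ext3 1G
PART / ext4 all
IMAGE /root/images/Ubuntu-2204-jammy-amd64-base.tar.gz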
Ok, now we have to install Apache, HTTPS, Django, copy the project over, etc.
df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs            13G  1.1M   13G   1% /run
/dev/md2        434G  2.3G  410G   1% /
tmpfs            63G     0   63G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
/dev/md1        989M  128M  810M  14% /boot
tmpfs            13G  4.0K   13G   1% /run/user/0
But where is our HD with data?
Nice: the system is installed and booted from RAID1 (/dev/md2 for /, and /dev/md1 for /boot). 🎉
Now let's find and mount your data drive (/dev/sdc, the 2 TB HDD) that holds your old ssd_copy, templates, etc.
🔧 Step 1: See if the drive is detected
lsblk
fdisk -l /dev/sdc
🔧 If /dev/sdc1 exists (as before), try mounting it
mkdir /mnt/data
mount /dev/sdc1 /mnt/data
ls -lah /mnt/data
🛠️ Optional: Auto-mount at boot
Once you're sure it’s the correct drive and the data is intact, add it to /etc/fstab.
Get the UUID:
blkid /dev/sdc1
/dev/sdc1: UUID="fcb83f9e-8d35-4b71-b150-51277101eefd" TYPE="ext4"
Then add it to /etc/fstab:
vim /etc/fstab
UUID=fcb83f9e-8d35-4b71-b150-51277101eefd /mnt/data ext4 defaults 0 2
mount -a
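Then verify the mount actually took effect; both are standard util-linux tools:
findmnt /mnt/data
df -h /mnt/data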
Let me know what lsblk shows or if you hit anything weird while mounting. Once that’s done, we can move on to:
✅ Installing Apache
✅ Setting up Python + Django
✅ Copying in your project
lsblk
NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda         8:0   0 447.1G  0 disk
├─sda1      8:1   0     4G  0 part
│ └─md0     9:0   0     4G  0 raid1 [SWAP]
├─sda2      8:2   0     1G  0 part
│ └─md1     9:1   0  1022M  0 raid1 /boot
└─sda3      8:3   0 442.1G  0 part
  └─md2     9:2   0   442G  0 raid1 /
sdb         8:16  0 447.1G  0 disk
├─sdb1      8:17  0     4G  0 part
│ └─md0     9:0   0     4G  0 raid1 [SWAP]
├─sdb2      8:18  0     1G  0 part
│ └─md1     9:1   0  1022M  0 raid1 /boot
└─sdb3      8:19  0 442.1G  0 part
  └─md2     9:2   0   442G  0 raid1 /
sdc         8:32  0   1.8T  0 disk
└─sdc1      8:33  0   1.8T  0 part
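So /dev/sdc1 is there and ready to mount. For the next phase, one common way to serve Django behind Apache on Ubuntu 22.04 starts with these packages (package names are an assumption for this stack; the project-specific setup comes after):
apt update
apt install -y apache2 libapache2-mod-wsgi-py3 python3-venv python3-pip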