[TOC]

Preface

https://access.redhat.com/documentation/zh-cn/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/lv_overview

LVM: Logical Volume Manager

To create an LVM logical volume, physical volumes are combined into a volume group (VG). This creates a pool of disk space out of which LVM logical volumes (LVs) can be allocated. The process is analogous to partitioning a disk. Logical volumes are then used by file systems and applications (such as databases).

Anatomy of an LVM logical volume

Figure 1.1, “LVM Logical Volume Components”, shows the components of a simple LVM logical volume.

Logical volume management creates a layer of abstraction over physical storage, from which logical storage volumes are created. This provides much greater flexibility in a number of ways than using physical storage directly; for example, logical volumes are not constrained by the size of the underlying physical disks.

Multiple logical volumes

https://access.redhat.com/documentation/zh-cn/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/lv_overview

Physical volumes

https://access.redhat.com/documentation/zh-cn/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/lvm_components

The underlying physical storage unit of an LVM logical volume is a block device such as a partition or a whole disk. To use the device for an LVM logical volume, the device must be initialized as a physical volume (PV). Initializing a block device as a physical volume places a label near the start of the device.

By default, the LVM label is placed in the second 512-byte sector. You can override this default by placing the label on any of the first 4 sectors. This allows LVM volumes to coexist with other users of these sectors, if necessary.

An LVM label provides correct identification and device ordering for a physical device, since devices can come up in any order when the system is booted. An LVM label remains persistent across reboots and throughout a cluster.

The LVM label identifies the device as an LVM physical volume. It contains a random unique identifier (UUID) for the physical volume. It also records the size of the block device in bytes, and where the LVM metadata is stored on the device.

The LVM metadata contains the configuration details of the LVM volume groups on your system. By default, an identical copy of the metadata is kept in the metadata area of every physical volume in the volume group. LVM metadata is small and stored in ASCII format.

Currently LVM allows you to store 0, 1, or 2 copies of its metadata on each physical volume; the default is 1 copy. Once the number of metadata copies on a physical volume is configured, it cannot be changed later. The first copy is stored at the start of the device, shortly after the label. If there is a second copy, it is placed at the end of the device. If you accidentally overwrite the start of a disk by writing to a different disk than you intended, the second copy of the metadata at the end of the device lets you recover the metadata.

2.2. Volume groups

Physical volumes are combined into volume groups (VGs). This creates a pool of disk space out of which logical volumes can be allocated.

Within a volume group, the disk space available for allocation is divided into units of a fixed size called extents. An extent is the smallest unit of space that can be allocated. Within a physical volume, extents are referred to as physical extents.

A logical volume is allocated into logical extents of the same size as the physical extents. The extent size is therefore the same for all logical volumes in the volume group. The volume group maps the logical extents onto physical extents.
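
The extent bookkeeping above can be checked with simple arithmetic: an LV's size is its extent count times the VG's extent size. A quick sketch using the 4 MiB default extent size (the extent count is taken from the vgdisplay output later in these notes):

```shell
# Size = number of extents * extent size.
# With 4 MiB extents, 1430510 free physical extents is about 5.46 TiB.
pe_size_mib=4
free_pe=1430510
size_mib=$((free_pe * pe_size_mib))
size_gib=$((size_mib / 1024))
echo "${size_mib} MiB (~${size_gib} GiB)"   # prints: 5722040 MiB (~5587 GiB)
```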

2.3. LVM logical volumes

In LVM, a volume group is divided up into logical volumes. The following sections describe the different types of logical volumes.

2.3.1. Linear volumes

A linear volume aggregates space from one or more physical volumes into one logical volume.

2.3.2. Striped logical volumes

When data is written to an LVM logical volume, the file system lays the data out across the underlying physical volumes. You can control the way the data is written to the physical volumes by creating a striped logical volume. For large sequential reads and writes, this can improve the efficiency of the data I/O.

2.3.3. RAID logical volumes

LVM supports RAID 1/4/5/6/10. An LVM RAID volume has the following characteristics:

  • LVM creates and manages RAID logical volumes by leveraging the MD kernel drivers.
  • RAID1 images can be temporarily split from the array and merged back into it later.
  • LVM RAID volumes support snapshots.

2.3.4. Thinly-provisioned logical volumes (thin volumes)

Logical volumes can be thinly provisioned, which allows you to create logical volumes that are larger than the available extents. Using thin provisioning, you can manage a storage pool of free space, known as a thin pool, which can be allocated to an arbitrary number of devices when needed by applications. Devices bound to the thin pool are then allocated real space only when an application actually writes to the logical volume. The thin pool can be expanded dynamically when needed, for cost-effective allocation of storage space.

2.3.5. Snapshot volumes

The LVM snapshot feature provides the ability to create virtual images of a device at a particular instant, without causing a service interruption. When a change is made to the original device after a snapshot is taken, the snapshot feature makes a copy of the changed data area as it was before the change, so that the state of the device can be reconstructed.

2.3.6. Thinly-provisioned snapshot volumes

Red Hat Enterprise Linux provides support for thinly-provisioned snapshot volumes. Thin snapshot volumes allow many virtual devices to be stored on the same data volume, which simplifies administration and allows data to be shared between snapshot volumes.

Neither ordinary LVM snapshot volumes nor thin snapshot volumes are supported across the nodes of a cluster; a snapshot volume must be exclusively activated on exactly one cluster node.

2.3.7. Cache volumes

As of the Red Hat Enterprise Linux 7.1 release, LVM supports the use of fast block devices (such as SSD drives) as write-back or write-through caches for larger, slower block devices. Users can create cache logical volumes to improve the performance of their existing logical volumes, or create new cache logical volumes composed of a small, fast device coupled with a large, slow device.

Expanding LVM online with a 6 TB SATA disk

-> https://techjogging.com/expand-logical-volume-in-centosrhel-7.html

https://www.tecmint.com/extend-and-reduce-lvms-in-linux/

Resize an LVM partition on a GPT drive

EXTENDING GPT/LVM DISK ON CENTOS

https://linuxize.com/post/how-to-mount-and-unmount-file-systems-in-linux/

How to create an XFS Filesystem

yum install -y gdisk
[root@localhost overlay2]# gdisk /dev/sdl
GPT fdisk (gdisk) version 0.8.10

Partition table scan:
MBR: not present
BSD: not present
APM: not present
GPT: not present

Creating new GPT entries.

Command (? for help): ? #<----####
b back up GPT data to a file
c change a partition's name
d delete a partition
i show detailed information on a partition
l list known partition types
n add a new partition
o create a new empty GUID partition table (GPT)
p print the partition table
q quit without saving changes
r recovery and transformation options (experts only)
s sort partitions
t change a partition's type code
v verify disk
w write table to disk and exit
x extra functionality (experts only)
? print this menu

Command (? for help): n #<----####
Partition number (1-128, default 1):
First sector (34-11718748126, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-11718748126, default = 11718748126) or {+-}size{KMGTP}:
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): L
0700 Microsoft basic data 0c01 Microsoft reserved 2700 Windows RE
3000 ONIE boot 3001 ONIE config 4100 PowerPC PReP boot
4200 Windows LDM data 4201 Windows LDM metadata 7501 IBM GPFS
7f00 ChromeOS kernel 7f01 ChromeOS root 7f02 ChromeOS reserved
8200 Linux swap 8300 Linux filesystem 8301 Linux reserved
8302 Linux /home 8400 Intel Rapid Start 8e00 Linux LVM
a500 FreeBSD disklabel a501 FreeBSD boot a502 FreeBSD swap
a503 FreeBSD UFS a504 FreeBSD ZFS a505 FreeBSD Vinum/RAID
a580 Midnight BSD data a581 Midnight BSD boot a582 Midnight BSD swap
a583 Midnight BSD UFS a584 Midnight BSD ZFS a585 Midnight BSD Vinum
a800 Apple UFS a901 NetBSD swap a902 NetBSD FFS
a903 NetBSD LFS a904 NetBSD concatenated a905 NetBSD encrypted
a906 NetBSD RAID ab00 Apple boot af00 Apple HFS/HFS+
af01 Apple RAID af02 Apple RAID offline af03 Apple label
af04 AppleTV recovery af05 Apple Core Storage be00 Solaris boot
bf00 Solaris root bf01 Solaris /usr & Mac Z bf02 Solaris swap
bf03 Solaris backup bf04 Solaris /var bf05 Solaris /home
bf06 Solaris alternate se bf07 Solaris Reserved 1 bf08 Solaris Reserved 2
bf09 Solaris Reserved 3 bf0a Solaris Reserved 4 bf0b Solaris Reserved 5
c001 HP-UX data c002 HP-UX service ea00 Freedesktop $BOOT
eb00 Haiku BFS ed00 Sony system partitio ed01 Lenovo system partit
......
Hex code or GUID (L to show codes, Enter = 8300): 8e00 #<----####
Changed type of partition to 'Linux LVM'

Create a new logical volume partition. Enter 8E00 partition code.


Command (? for help): p #<----####
Disk /dev/sdl: 11718748160 sectors, 5.5 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 95DD2FFE-2BF6-4D2E-B9D0-F217095575A1
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 11718748126
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number Start (sector) End (sector) Size Code Name
1 2048 11718748126 5.5 TiB 8E00 Linux LVM

Command (? for help): w #<----####

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sdl.
The operation has completed successfully


# gdisk /dev/sdl
GPT fdisk (gdisk) version 0.8.10

Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present

Found valid GPT with protective MBR; using GPT


# Notify the operation system about changes in the partition tables.
# man partprobe - inform the OS of partition table changes
$ partprobe

# fdisk -l /dev/sdl
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdl: 6000.0 GB, 5999999057920 bytes, 11718748160 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt
Disk identifier: 95DD2FFE-2BF6-4D2E-B9D0-F217095575A1


# Start End Size Type Name
1 2048 11718748126 5.5T Linux LVM Linux LVM
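
As a sanity check, the sector count fdisk reports above converts exactly to the byte size it prints (512-byte logical sectors):

```shell
sectors=11718748160
bytes=$((sectors * 512))
echo "$bytes"   # prints: 5999999057920  (~6.0 TB, i.e. ~5.5 TiB)
```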


# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
home centos -wi-ao---- 199.61g
root centos -wi-ao---- 50.00g
swap centos -wi-ao---- <27.85g

1. Create a physical volume.

[root@localhost overlay2]# pvcreate /dev/sdl1
Physical volume "/dev/sdl1" successfully created.
[root@localhost overlay2]# vgextend centos /dev/sdl1
Volume group "centos" successfully extended

# The value to check is the one on the "Free PE / Size" line below: 1430510 free extents
[root@localhost overlay2]# vgdisplay
--- Volume group ---
VG Name centos
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 5
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 2
Act PV 2
VG Size <5.73 TiB
PE Size 4.00 MiB
Total PE 1501540
Alloc PE / Size 71030 / 277.46 GiB
Free PE / Size 1430510 / <5.46 TiB
VG UUID k0WW0H-vB7z-EJpQ-PFnf-SBcq-BFMh-H3jDyf


[root@localhost overlay2]# lvextend -l+1430510 centos/root # with -l, a bare number (no M/G suffix) means physical extents (PE)
Size of logical volume centos/root changed from 50.00 GiB (12800 extents) to <5.51 TiB (1443310 extents).
Logical volume centos/root successfully resized.
[root@localhost overlay2]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/centos-root xfs 50G 38G 13G 76% /
devtmpfs devtmpfs 32G 0 32G 0% /dev
tmpfs tmpfs 32G 0 32G 0% /dev/shm
tmpfs tmpfs 32G 594M 31G 2% /run
tmpfs tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/sda1 xfs 1014M 145M 870M 15% /boot
/dev/mapper/centos-home xfs 200G 33M 200G 1% /home
tmpfs tmpfs 6.3G 0 6.3G 0% /run/user/0
overlay overlay 50G 38G 13G 76% /var/lib/docker/overlay2/659d39534fdf967fe3f38e8abb42453c089582f342138b355ae087b3d7b5a70b/merged
overlay overlay 50G 38G 13G 76% /var/lib/docker/overlay2/e585c0c1a1b5122c12b65b81eff638c7e958d54e13a1a50c9340badd78ac47ae/merged
/dev/sdb xfs 5.5T 13G 5.5T 1% /mnt/data1
/dev/sdc xfs 5.5T 13G 5.5T 1% /mnt/data2
/dev/sdd xfs 5.5T 13G 5.5T 1% /mnt/data3
/dev/sde xfs 5.5T 13G 5.5T 1% /mnt/data4
overlay overlay 50G 38G 13G 76% /var/lib/docker/overlay2/57651a003d92c863886fac2619c5ef493111155b662be0cfee8869d8b02fec55/merged
/dev/sdm xfs 5.5T 1.7T 3.9T 31% /mnt/sdm
[root@localhost overlay2]# xfs_growfs /dev/centos/root
# Note: resize2fs only works on ext2/3/4 filesystems; for XFS, xfs_growfs (above) is the correct tool.
meta-data=/dev/mapper/centos-root isize=512 agcount=4, agsize=3276800 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=0 spinodes=0
data = bsize=4096 blocks=13107200, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal bsize=4096 blocks=6400, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 13107200 to 1477949440
[root@localhost overlay2]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/centos-root xfs 5.6T 38G 5.5T 1% /
devtmpfs devtmpfs 32G 0 32G 0% /dev
tmpfs tmpfs 32G 0 32G 0% /dev/shm
tmpfs tmpfs 32G 594M 31G 2% /run
tmpfs tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/sda1 xfs 1014M 145M 870M 15% /boot
/dev/mapper/centos-home xfs 200G 33M 200G 1% /home
tmpfs tmpfs 6.3G 0 6.3G 0% /run/user/0
overlay overlay 5.6T 38G 5.5T 1% /var/lib/docker/overlay2/659d39534fdf967fe3f38e8abb42453c089582f342138b355ae087b3d7b5a70b/merged
overlay overlay 5.6T 38G 5.5T 1% /var/lib/docker/overlay2/e585c0c1a1b5122c12b65b81eff638c7e958d54e13a1a50c9340badd78ac47ae/merged
/dev/sdb xfs 5.5T 13G 5.5T 1% /mnt/data1
/dev/sdc xfs 5.5T 13G 5.5T 1% /mnt/data2
/dev/sdd xfs 5.5T 13G 5.5T 1% /mnt/data3
/dev/sde xfs 5.5T 13G 5.5T 1% /mnt/data4
overlay overlay 5.6T 38G 5.5T 1% /var/lib/docker/overlay2/57651a003d92c863886fac2619c5ef493111155b662be0cfee8869d8b02fec55/merged
/dev/sdm xfs 5.5T 1.7T 3.9T 31% /mnt/sdm

how to increase the size for /dev/mapper/centos-root?

TL;DR

For VG vg0, LV lv0, and a new disk /dev/sdb, extending by 5 GB:

  1. Check the available space on the VG: vgdisplay. If there is enough, go to step 4
  2. If you don’t have space, add a disk and create a PV: pvcreate /dev/sdb1
  3. Extend the VG: vgextend vg0 /dev/sdb1
  4. Extend the LV: lvextend /dev/vg0/lv0 -L +5G
  5. Check: lvscan
  6. Resize the file system: resize2fs /dev/vg0/lv0
  7. Check: df -h | grep l
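
One caveat with step 6 above: resize2fs only handles ext2/3/4 filesystems. On XFS (the CentOS 7 default, as in the session earlier) the grow step must be xfs_growfs instead, and XFS can only be grown, never shrunk. A minimal sketch of picking the right tool; the lsblk lookup in the comment is how the type would be obtained on a real system:

```shell
# Pick the grow command from the filesystem type.
# On a real system: fstype=$(lsblk -no FSTYPE /dev/mapper/centos-root)
fstype="xfs"
case "$fstype" in
  xfs)            grow_cmd="xfs_growfs" ;;
  ext2|ext3|ext4) grow_cmd="resize2fs" ;;
  *)              grow_cmd="unknown" ;;
esac
echo "grow with: $grow_cmd"   # prints: grow with: xfs_growfs
```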

mount



mount

device_name on directory type filesystem_type (options)
mount [OPTION...] DEVICE_NAME DIRECTORY

mount -t TYPE DEVICE_NAME DIRECTORY
mount -o OPTIONS DEVICE_NAME DIRECTORY

Multiple options can be provided as a comma-separated list (do not insert a space after a comma).
# mkdir /mnt/data1
# mount -t xfs /dev/sdb /mnt/data1
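
A mount made this way does not survive a reboot. To make it persistent, a matching /etc/fstab line is needed; a minimal sketch for the example above (options here are the bare defaults, adjust as needed):

```
/dev/sdb   /mnt/data1   xfs   defaults   0 0
```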


# du -sch * --exclude=mnt
0 bin
112M boot
18M choe
0 dev
37M etc
0 home
0 lib
0 lib64
0 media
38M opt
du: cannot access ‘proc/120264/task/120264/fd/4’: No such file or directory
du: cannot access ‘proc/120264/task/120264/fdinfo/4’: No such file or directory
du: cannot access ‘proc/120264/fd/4’: No such file or directory
du: cannot access ‘proc/120264/fdinfo/4’: No such file or directory
0 proc
1.9G root
594M run
0 sbin
0 srv
0 sys
718M tmp
1.8G usr
1.2G var
6.2G total



pvresize /dev/sda2
lvextend /dev/mapper/centos-root -l +100%FREE
xfs_growfs /dev/mapper/centos-root

Summary:

For a large GPT disk, use gdisk or parted to create a GPT partition with the Linux LVM partition type (code 8e00).

A volume group built from a mix of MBR and GPT partitions works without problems, as far as I can tell so far.

The difference between GPT LVM and MBR LVM is only the partitioning tool used; in gdisk the type code is 8e00 (Linux LVM) in both cases.

This time pvcreate was run on a GPT partition with partition type 8e00. The Red Hat documentation says a whole disk can also be passed directly to pvcreate; worth trying when the chance comes up.

Refer: Expand Logical Volume in CentOS/RHEL 7

[TOC]

Column Selection

Overview

Column Selection can be used to select a rectangular area of a file. Column selection doesn’t operate via a separate mode, instead it makes use of multiple selections.

You can use additive selections to select multiple blocks of text, or subtractive selections to remove a block.

Using the Mouse

Different mouse buttons are used on each platform:

OS X

  • Left Mouse Button + Option

  • OR: Middle Mouse Button

  • Add to selection: Command

  • Subtract from selection: Command+Shift

Windows

  • Right Mouse Button + Shift

  • OR: Middle Mouse Button

  • Add to selection: Ctrl

  • Subtract from selection: Alt

Linux

  • Right Mouse Button + Shift

  • Add to selection: Ctrl

  • Subtract from selection: Alt

Using the Keyboard

OS X

  • Ctrl + Shift + Up
  • Ctrl + Shift + Down

Windows

  • Ctrl + Alt + Up
  • Ctrl + Alt + Down

Linux

  • Ctrl + Alt + Up
  • Ctrl + Alt + Down

[TOC]

block storage

you would treat it like a normal disk. You could format it with a filesystem and store files on it, combine multiple devices into a RAID array, or configure a database to write directly to the block device, avoiding filesystem overhead entirely.

In addition, network-attached block storage devices (NAS devices such as Synology DSM or QNAP QTS) often have some unique advantages over normal hard drives:

  • You can easily take live snapshots of the entire device for backup purposes
  • Block storage devices can be resized to accommodate growing needs
  • You can easily detach and move block storage devices between machines

Some advantages of block storage are:

  • Block storage is a familiar paradigm. People and software understand and support files and filesystems almost universally
  • Block devices are well supported. Every programming language can easily read and write files
  • Filesystem permissions and access controls are familiar and well-understood
  • Block storage devices provide low latency IO, so they are suitable for use by databases.

The disadvantages of block storage are:

  • Storage is tied to one server at a time
  • Blocks and filesystems have limited metadata about the blobs of information they’re storing (creation date, owner, size). Any additional information about what you’re storing will have to be handled at the application and database level, which is additional complexity for a developer to worry about
  • You need to pay for all the block storage space you’ve allocated, even if you’re not using it
  • You can only access block storage through a running server
  • Block storage needs more hands-on work and setup vs object storage (filesystem choices, permissions, versioning, backups, etc.)

Because of its fast, low-latency IO, block storage is especially well suited to traditional databases.

Self-hosted block storage can be provided using OpenStack Cinder, Ceph, or the built-in iSCSI service available on many NAS devices.

object storage

object storage is the storage and retrieval of unstructured blobs of data and metadata using an HTTP API. Instead of breaking files down into blocks to store it on disk using a filesystem, we deal with whole objects stored over the network. These objects could be an image file, logs, HTML files, or any self-contained blob of bytes. They are unstructured because there is no specific schema or format they need to follow

Some advantages of object storage are:

  • A simple HTTP API, with clients available for all major operating systems and programming languages
  • A cost structure that means you only pay for what you use
  • Built-in public serving of static assets means one less server for you to run yourself
  • Some object stores offer built-in CDN integration, which cache your assets around the globe to make downloads and page loads faster for your users
  • Optional versioning means you can retrieve old versions of objects to recover from accidental overwrites of data
  • Object storage services can easily scale from modest needs to really intense use-cases without the developer having to launch more resources or rearchitect to handle the load
  • Using an object storage service means you don’t have to maintain hard drives and RAID arrays, as that’s handled by the service provider
  • Being able to store chunks of metadata alongside your data blob can further simplify your application architecture

Some disadvantages of object storage are:

  • You can’t use object storage services to back a traditional database, due to the high latency of such services
  • Object storage doesn’t allow you to alter just a piece of a data blob, you must read and write an entire object at once. This has some performance implications. For instance, on a filesystem, you can easily append a single line to the end of a log file. On an object storage system, you’d need to retrieve the object, add the new line, and write the entire object back. This makes object storage less ideal for data that changes very frequently
  • Operating systems can’t easily mount an object store like a normal disk. There are some clients and adapters to help with this, but in general, using and browsing an object store is not as simple as flipping through directories in a file browser
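
The log-append example above can be made concrete: on a filesystem you append in place, while an object-store-style update has to read the whole blob, modify it, and write the whole thing back. A sketch using local files to stand in for objects (names are made up):

```shell
tmp=$(mktemp -d)
log="$tmp/app.log"

# Filesystem: append a single line in place.
printf 'line1\n' >  "$log"
printf 'line2\n' >> "$log"

# Object-store style: "GET" the whole object, modify it, "PUT" it all back.
obj=$(cat "$log")
{ printf '%s\n' "$obj"; echo 'line3'; } > "$log"

cat "$log"   # prints line1, line2, line3 on separate lines
rm -rf "$tmp"
```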

Because of these properties, object storage is useful for hosting static assets, saving user-generated content such as images and movies, storing backup files, and storing logs, for example.

Minio, a popular object storage server written in the Go language

or Ceph, or OpenStack Swift.

[TOC]

Model–view–viewmodel
https://vuejs.org/v2/examples/index.html
Introducing JSX
Vue Devtools https://github.com/vuejs/vue-devtools#vue-devtools
older bundlers like browserify or webpack 1.
modern bundlers like webpack 2 or Rollup.
Single File Components.
arrow functions
How do I successfully make an HTTP PUT request in Vue?
pagekit/vue-resource https://github.com/pagekit/vue-resource/blob/develop/docs/http.md
Create a Single Page App With Go, Echo and Vue

CDN links for the development builds of vue and vue-resource

<script src="https://cdn.jsdelivr.net/npm/vue/dist/vue.js"></script>
<script src="https://cdn.jsdelivr.net/npm/vue-resource@1.5.1"></script>

Vue mount point caveat

The element specified by el (or by vm.$mount()) is completely replaced by the DOM that Vue generates, so mounting a Vue instance to the html or body element is not recommended; if you do, Vue renders nothing and logs a warning.


<!-- HTML file -->
<html>
<head></head>
<body>
</body>
</html>

// JavaScript file
import Vue from 'vue';

// Example 1
new Vue({
  el: 'html', // VUE_BAD_MOUNT alarm
  template: '<div>Hello!</div>'
});

// Example 2
const MyComponent = Vue.extend({
  template: '<div>Hello!</div>'
});
new MyComponent().$mount('body'); // VUE_BAD_MOUNT alarm

<!-- HTML file -->
<html>
<head></head>
<body>
<div id="app1"></div>
<div id="app2"></div>
</body>
</html>

// JavaScript file
import Vue from 'vue';

// Example 1
new Vue({
  el: '#app1',
  template: '<div>Hello!</div>'
});

// Example 2
const MyComponent = Vue.extend({
  template: '<div>Hello!</div>'
});
new MyComponent().$mount('#app2');

Vue.js is detected on this page. Devtools inspection is not available because it’s in production mode or explicitly disabled by the author.

You are probably using Vue from CDN, and probably using a production build (dist/vue.min.js). Either replace it with a dev build (dist/vue.js) or add Vue.config.devtools = true to the main js file.

new Vue({
  el: '#app',
  data: {
    message: 'hello world'
  }
});
Vue.config.devtools = true;

Set the following before any references to Vue:

Vue.config.devtools = true
then also close and repopen the browser or developer console

Good Luck...

Vue.config.devtools = true;
new Vue({
el: '#app',

Select element with v-model and v-on=”change:onChange” works differently in FF

computed vs watch

on-change doesn’t work on v-select

v-model vs v-bind

// v-model="vueData": two-way binding between the form input and Vue data (VueData <=> UI)
<input v-model="message" >
// v-bind:value="vueData": one-way binding, Vue data => UI
<input v-bind:value="message">


var vm = new Vue({
  data: {
    message: ''
  }
});

Layout

https://getbootstrap.com/docs/4.0/layout/grid/

Communicating with components: props

props

components have isolated scopes of their own

Sending messages

https://vuejs.org/v2/guide/components.html

calling the built-in $emit method, passing the name of the event:


<blog-post
...
v-on:enlarge-text="postFontSize += 0.1"
></blog-post>



<button v-on:click="$emit('enlarge-text')">
Enlarge text
</button>

Thanks to the v-on:enlarge-text="postFontSize += 0.1" listener, the parent will receive the event and update postFontSize value.

https://vuejs.org/v2/guide/components.html


<button v-on:click="$emit('enlarge-text', 0.1)">
Enlarge text
</button>


Then when we listen to the event in the parent, we can access the emitted event’s value with $event:

<blog-post
...
v-on:enlarge-text="postFontSize += $event"
></blog-post>

Or, if the event handler is a method:



<blog-post
...
v-on:enlarge-text="onEnlargeText"
></blog-post>

Then the value will be passed as the first parameter of that method:

methods: {
onEnlargeText: function (enlargeAmount) {
this.postFontSize += enlargeAmount
}
}

the W3C rules for custom tag names

component names in the Style Guide.

bootstrap datepicker

https://codepen.io/Yuping/pen/xqrjE

How do I successfully make an HTTP PUT request in Vue?

fetch put method


fetch(`https://tap-on-it-exercise-backend.herokuapp.com/products/${product.likes}`, {
  method: 'PUT',
  headers: {
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    // data you intend to send as JSON to the server
    whatever: 'ding!'
  })
})

Formatting date/time values with the Vue el-date-picker component

<el-date-picker size="small" v-model="editData.editItem.startTime"
                type="datetime"
                placeholder="选择日期时间" value-format="yyyy-MM-dd HH:mm" format="yyyy-MM-dd HH:mm">
</el-date-picker>

Don’t use arrow functions on an options property or callback, such as created: () => console.log(this.a) or vm.$watch('a', newValue => this.myMethod()). Since an arrow function doesn’t have a this, this will be treated as any other variable and lexically looked up through parent scopes until found, often resulting in errors such as Uncaught TypeError: Cannot read property of undefined or Uncaught TypeError: this.myMethod is not a function.

Deploy a VueJS web app with nginx on Ubuntu 18.04

https://snipcart.com/blog/vue-render-functions

Errors

$http.get(…).success is not a function

[TOC]

https://docs.min.io/cn/minio-quickstart-guide.html
https://docs.min.io/cn/setup-nginx-proxy-with-minio.html
https://tonybai.com/2020/03/16/build-high-performance-object-storage-with-minio-part1-prototype/
Linux storage stack diagram
Server configuration guide
MinIO JavaScript Library for Amazon S3 Compatible Cloud Storage
Upload Files Using Pre-signed URLs
MinIO Admin Complete Guide
minio-js github
MinIO Multi-user Quickstart Guide
JavaScript Client API Reference
AWS authentication: Authenticating Requests: Using Query Parameters (AWS Signature Version 4)
MinIO Client (mc)

MinIO

mc: the MinIO Client (mc)

How to Choose Your Red Hat Enterprise Linux File System

XFS vs Ext4

If both your server and your storage device are large, XFS is likely to be the best choice

In general, Ext3 or Ext4 is better if an application uses a single read/write thread and small files, while XFS shines when an application uses multiple read/write threads and bigger files.

mount disk




docker run -p 9000:9000 --name minio \
> -v /mnt/data1:/data1 \
> -v /mnt/data2:/data2 \
> -v /mnt/data3:/data3 \
> -v /mnt/data4:/data4 \
> -v /root/storage/config:/root/.minio \
> minio/minio server /data1 /data2 /data3 /data4
Formatting 1st zone, 1 set(s), 4 drives per set.
WARNING: Host local has more than 2 drives of set. A host failure will result in data becoming unavailable.
Status: 4 Online, 0 Offline.
Endpoint: http://172.17.0.4:9000 http://127.0.0.1:9000

Browser Access:
http://172.17.0.4:9000 http://127.0.0.1:9000

Object API (Amazon S3 compatible):
Go: https://docs.min.io/docs/golang-client-quickstart-guide
Java: https://docs.min.io/docs/java-client-quickstart-guide
Python: https://docs.min.io/docs/python-client-quickstart-guide
JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
.NET: https://docs.min.io/docs/dotnet-client-quickstart-guide
Detected default credentials 'minioadmin:minioadmin', please change the credentials immediately using 'MINIO_ACCESS_KEY' and 'MINIO_SECRET_KEY'
docker pull minio/minio
docker run -d -p 9000:9000 --name minio \
-e "MINIO_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE" \
-e "MINIO_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" \
-e "MINIO_REGION_NAME=cn_shanghai" \
-v /mnt/data1:/data1 \
-v /mnt/data2:/data2 \
-v /mnt/data3:/data3 \
-v /mnt/data4:/data4 \
-v /root/storage/config:/root/.minio \
minio/minio server /data1 /data2 /data3 /data4


├── data1
│   └── bucket-test1
│       └── gcc-10.2.0.tar.gz
│           ├── 0a74abde-131a-4153-a514-6df267a1012d
│           │   └── part.1
│           └── xl.meta
├── data2
│   └── bucket-test1
│       └── gcc-10.2.0.tar.gz
│           ├── 0a74abde-131a-4153-a514-6df267a1012d
│           │   └── part.1
│           └── xl.meta
├── data3
│   └── bucket-test1
│       └── gcc-10.2.0.tar.gz
│           ├── 0a74abde-131a-4153-a514-6df267a1012d
│           │   └── part.1
│           └── xl.meta
└── data4
    └── bucket-test1
        └── gcc-10.2.0.tar.gz
            ├── 0a74abde-131a-4153-a514-6df267a1012d
            │   └── part.1
            └── xl.meta

16 directories, 8 files
[root@localhost mnt]# cat data1/bucket-test1/gcc-10.2.0.tar.gz/xl.meta
XL2 1 ��Versions���Type�V2Obj��ID��DDir�
t��AS�m�g�-�EcAlgo�EcM�EcN�EcBSize���EcIndex�EcDist��CSumAlgo�PartNums��PartETags���PartSizes���2y�PartASizes���2y�Size��2y�MTime�*�2ŧMetaSys��MetaUsr��etag� 941a8674ea2eeb33f5c30ecf08124874�content-type�application/x-gzip[root@localhost mnt]#

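The tree above is MinIO's erasure coding at work: every drive holds a shard of the object (part.1) plus its metadata (xl.meta). With the default parity of half the drives on builds of this era, a 4-drive set survives losing up to 2 drives, at the cost of roughly half the raw capacity. (The parity formula here is an assumption to check against the MinIO docs for your version.)

```shell
drives=4
parity=$((drives / 2))            # assumed default: parity shards = N/2
data_shards=$((drives - parity))
echo "tolerates ${parity} drive failures; ${data_shards}/${drives} of raw capacity is usable"
```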
Upload Files Using Pre-signed URLs

https://docs.min.io/docs/upload-files-from-browser-using-pre-signed-urls.html


npm install --save minio

https://docs.min.io/docs/minio-admin-complete-guide.html#trace

SignatureDoesNotMatch

SignatureDoesNotMatch on Minio Server Docker #794

https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html

https://www.rapidspike.com/blog/s3-signature-not-match-error-using-pre-signed-request/

presignedGetObject’s url return SignatureDoesNotMatch error #6546

https://docs.min.io/docs/javascript-client-api-reference#presignedPutObject

Getting 403 (Forbidden) when uploading to S3 with a signed URL

SignatureDoesNotMatch on Minio Server Docker #794

SignatureDoesNotMatch

docker exec -it minio /bin/sh


http://s3.1024game.cn:9000/uploads/%E5%BA%94%E7%94%A8%E5%95%86%E5%BA%97%E9%A1%B9%E7%9B%AE-Win.zip?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIOSFODNN7EXAMPLE%2F20200820%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20200820T075525Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=bd6bb248a667e6f3ee14f41d9ff0a79a8bba314bd43ec31f0d7125d94b06c428

<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
<Key>应用商店项目-Win.zip</Key>
<BucketName>uploads</BucketName>
<Resource>/uploads/应用商店项目-Win.zip</Resource>
<RequestId>162CEAD75B0E8CC9</RequestId>
<HostId>6f9b2b94-d4fe-4687-8a79-7085c2309699</HostId>
</Error>

aws S3 signatureV4

https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html


https://s3.amazonaws.com/examplebucket/test.txt
?X-Amz-Algorithm=AWS4-HMAC-SHA256
&X-Amz-Credential=<your-access-key-id>/20130721/us-east-1/s3/aws4_request
&X-Amz-Date=20130721T201207Z
&X-Amz-Expires=86400
&X-Amz-SignedHeaders=host
&X-Amz-Signature=<signature-value>

minio config

config ~/.minio

nginx proxy

https://docs.min.io/docs/setup-nginx-proxy-with-minio.html

s3 Uploading an object

https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html

https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html

https://docs.aws.amazon.com/AmazonS3/latest/dev/UploadInSingleOp.html

mc docker

https://hub.docker.com/r/minio/mc/

docker pull minio/mc
docker run -it --entrypoint=/bin/sh minio/mc
# Add a MinIO Storage Service
mc config host add <ALIAS> <YOUR-MINIO-ENDPOINT> <YOUR-ACCESS-KEY> <YOUR-SECRET-KEY>
Alias is simply a short name to your MinIO service. MinIO end-point, access and secret keys are supplied by your MinIO service. Admin API uses "S3v4" signature and cannot be changed.

mc config host add minio http://192.168.1.2:9000 YOUR-ACCESS-KEY YOUR-SECRET-KEY
# Test Your Setup
mc admin info minio
# alias

alias minfo='mc admin info'

#Global Options

mc admin --debug info minio




[root@localhost storage]# ./mc admin info minio
● 127.0.0.1:9000
Uptime: 1 week
Version: 2020-08-13T02:39:50Z
Network: 1/1 OK
Drives: 4/4 OK

204 MiB Used, 3 Buckets, 12 Objects
4 drives online, 0 drives offline

./mc admin trace -a -v --debug minio

http://www.lw007.cn/docs/minio-admin-complete-guide.html

mc admin trace

MinIO Server Config Guide

docker-compose.yml

storage:
  restart: always
  networks:
    default:
      aliases:
        - bor.minio
  image: minio/minio:latest
  ports:
    - 4554:80
  environment:
    MINIO_DOMAIN: "s3-us-west-1.amazonaws.com"
    MINIO_ACCESS_KEY: ACCESS_KEY_ID
    MINIO_SECRET_KEY: secret123
    MINIO_HTTP_TRACE: /dev/stdout # move to mc admin trace
  command: minio server --address 0.0.0.0:80 /var/data/fakes3
  volumes:
    - storage_data:/var/data/fakes3:delegated

Debugging

https://github.com/minio/minio/tree/master/docs/debugging


# The default trace is succinct: it only shows the API operations being called and the HTTP response status.
mc admin trace myminio

# To trace entire HTTP Request
mc admin trace --verbose myminio

# + also internode communication , add flag: --all
mc admin trace --verbose --all myminio


mc admin --debug info minio

on-board diagnostics


mc admin obd myminio

Deploying S3 Stateful Containers - Minio(vmware) kubectl

permanent url


var publicUrl = minioClient.protocol + '//' + minioClient.host + ':' + minioClient.port + '/' + minioBucket + '/' + obj.name

bucket policy

Presigned URLs are valid only for a maximum of 7 days. This is mandated by S3 Spec (Ref: http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html).
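
This 7-day cap is exactly the X-Amz-Expires=604800 seen in the presigned URL earlier:

```shell
echo $((7 * 24 * 3600))   # prints: 604800
```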

For permanent sharing you can consider buckets policies - https://docs.minio.io/docs/minio-client-complete-guide#policy

how to generate share url permanent? #5180

Once you set the policy on a bucket like

mc policy set public myminio/bucketname

You can use the URL: miniohost:9000/bucketname/object to access the object

#mc policy public minio/test

Access permission for minio/test is set to public

#wget http://10.39.0.45:9000/test/types-of-mounts.png

It’s OK now, thanks.
I had sent the URL wget http://10.39.0.45:9000/minio/test/types-of-mounts.png before, so I was wrong (the web-console /minio prefix must not be part of the object URL).

https://blog.nikhilbhardwaj.in/2020/02/25/minio-bucket-policy/

I don’t know if this is a counter example or a different method. If I use the s3.getSignedUrl I can generate urls that are longer than 7 days.
JS Code

var urlParams= {"Bucket":"opennote","Key":"ovoay3yj5uky.png","Expires":77760000}
s3.getSignedUrl("getObject",urlParams,function(err,data){console.dir(err);console.dir(data)})

This gives a signature of ?AWSAccessKeyId=tests&Expires=1595816882&Signature=HQbjEiQUrqW87ShZSjVVOeHnz0o%3D, which is valid for 900 days from now.

S3 and MinIO accept this signature and display the object.

Why does it work?

Yes, but only with AWS Signature v2 (legacy); AWS Signature v4 limits it to a maximum of 7 days.
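The reason v2 has no such cap is that its Expires parameter is an absolute Unix timestamp baked into the signed string, whereas v4 signs a relative X-Amz-Expires capped at 604800 seconds. A minimal sketch of the v2 query-string signature (made-up keys; real clients also percent-encode the signature, which is omitted here):

```python
# Sketch of AWS Signature v2 query-string auth: Expires is an absolute
# epoch timestamp inside the string-to-sign, so any future date is valid.
# Access/secret keys and names are hypothetical; percent-encoding omitted.
import base64
import hashlib
import hmac

def presign_v2_get(access_key: str, secret_key: str, bucket: str, key: str,
                   expires_epoch: int) -> str:
    # StringToSign = Verb \n Content-MD5 \n Content-Type \n Expires \n Resource
    string_to_sign = f"GET\n\n\n{expires_epoch}\n/{bucket}/{key}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode()
    return (f"/{bucket}/{key}?AWSAccessKeyId={access_key}"
            f"&Expires={expires_epoch}&Signature={signature}")

print(presign_v2_get("tests", "secret", "opennote", "ovoay3yj5uky.png", 1595816882))
```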

It would be good if somebody pointed out in the docs that the policy prefix should not start with a slash / when you type it in via the web interface; I was badly fooled by that slash before checking the policies via mc.

via cmd
mc policy public myminio/bucketname
Here is the policy via SDK

{
  "Statement": [
    {
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads"
      ],
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Resource": "arn:aws:s3:::mybucketname",
      "Sid": ""
    },
    {
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:ListMultipartUploadParts",
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Resource": "arn:aws:s3:::mybucketname/*",
      "Sid": ""
    }
  ],
  "Version": "2012-10-17"
}
Replace mybucketname with the appropriate bucket name

All I did was create a new location in nginx, something like /bucketname, and in that block add a root pointing to the local path of the bucket folder.

minio policy help

PERMISSION:
Allowed policies are: [none, download, upload, public].


mc policy public minio/pub
Name:
  mc policy - manage anonymous access to buckets and objects

USAGE:
  mc policy [FLAGS] set PERMISSION TARGET
  mc policy [FLAGS] set-json FILE TARGET
  mc policy [FLAGS] get TARGET
  mc policy [FLAGS] get-json TARGET
  mc policy [FLAGS] list TARGET

FLAGS:
  --recursive, -r               list recursively
  --config-dir value, -C value  path to configuration folder (default: "/root/.mc")
  --quiet, -q                   disable progress bar display
  --no-color                    disable color theme
  --json                        enable JSON formatted output
  --debug                       enable debug output
  --insecure                    disable SSL certificate verification
  --help, -h                    show help

PERMISSION:
  Allowed policies are: [none, download, upload, public].

FILE:
  A valid S3 policy JSON filepath.

EXAMPLES:
  1. Set bucket to "download" on Amazon S3 cloud storage.
     $ mc policy set download s3/burningman2011

  2. Set bucket to "public" on Amazon S3 cloud storage.
     $ mc policy set public s3/shared

  3. Set bucket to "upload" on Amazon S3 cloud storage.
     $ mc policy set upload s3/incoming

  4. Set policy to "public" for bucket with prefix on Amazon S3 cloud storage.
     $ mc policy set public s3/public-commons/images

  5. Set a custom prefix based bucket policy on Amazon S3 cloud storage using a JSON file.
     $ mc policy set-json /path/to/policy.json s3/public-commons/images

  6. Get bucket permissions.
     $ mc policy get s3/shared

  7. Get bucket permissions in JSON format.
     $ mc policy get-json s3/shared

  8. List policies set to a specified bucket.
     $ mc policy list s3/shared

  9. List public object URLs recursively.
     $ mc policy --recursive links s3/shared/
[root@localhost ~]# mc policy get minio/uploads

package main

import (
	"log"
	"time"

	"github.com/minio/minio-go"
)

func main() {
	s3Client, err := minio.New("172.17.0.2:9000", "minio", "minio123", false)
	if err != nil {
		log.Fatalln(err)
	}
	err = s3Client.MakeBucket("test", "us-east-1")
	if err != nil {
		log.Fatal(err)
	}
	presignedURL, err := s3Client.PresignedPutObject("test", "my-objectname", time.Duration(1000)*time.Second)
	if err != nil {
		log.Fatalln(err)
	}
	log.Println(presignedURL)
}
curl -X PUT http://172.17.0.2:9000/test/my-objectname\?X-Amz-Algorithm\=AWS4-HMAC-SHA256\&X-Amz-Credential\=minio%2F20181129%2Fus-east-1%2Fs3%2Faws4_request\&X-Amz-Date\=20181129T030925Z\&X-Amz-Expires\=1000\&X-Amz-SignedHeaders\=host\&X-Amz-Signature\=babb4404c979968de5d21e9e654ad1c73f3ac4fed014c0edaeae639c2d159409 -H 'Content-Type: application/json'  -d  "{ \"blah\" : \"blah\"}"
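The X-Amz-Signature in that URL comes from SigV4's derived signing key: four chained HMAC-SHA256 operations over the date, region, service, and the literal aws4_request. A sketch of that derivation (illustrative values only, not a re-computation of the exact signature above):

```python
# SigV4 signing-key derivation: the key that ultimately signs the
# string-to-sign is built by chaining HMAC-SHA256 over scope components.
# The secret key and scope values below are illustrative only.
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    k_date = hmac.new(("AWS4" + secret_key).encode(), date.encode(), hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()

key = sigv4_signing_key("minio123", "20181129", "us-east-1", "s3")
print(key.hex())  # 64 hex chars: a 32-byte HMAC-SHA256 output
```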

docker-compose.yml


minio:
  image: minio/minio:latest
  restart: always
  environment:
    - MINIO_DOMAIN=minio.my.domain.com
    - MINIO_REGION=eu-west-1
  volumes:
    - '/mnt/docker/minio/data:/data'
    - '/mnt/docker/minio/config:/root/.minio'
  ports:
    - "9898:9000"
  command: "server /data"

upstream minio {
  server 127.0.0.1:9898;
}

server {
  listen 80;
  server_name minio.my.domain.com;
  return 302 https://$server_name$request_uri;
}

server {
  listen 443 ssl http2;
  server_name minio.my.domain.com;
  include snippets/self-signed.conf;
  include snippets/ssl-params.conf;
  location / {
    proxy_pass http://minio;
  }
}

mc config

# List the currently configured hosts; a host must be added before it can be managed
mc config host ls

mc config host add <ALIAS> <YOUR-MINIO-ENDPOINT> <YOUR-ACCESS-KEY> <YOUR-SECRET-KEY>
mc config host add minio http://192.168.1.51:9000 BKIKJAA5BMMU2RHO6IBB V7f1CwQqAcwo80UEIJEjc5gVQUSSx5ohQ9GSrr12

alias minfo='mc admin info'

mc admin --debug info minio

mc admin --help
NAME:
  mc admin - manage MinIO servers

USAGE:
  mc admin COMMAND [COMMAND FLAGS | -h] [ARGUMENTS...]

COMMANDS:
  service      restart and stop all MinIO servers
  update       update all MinIO servers
  info         display MinIO server information
  user         manage users
  group        manage groups
  policy       manage policies defined in the MinIO server
  config       manage MinIO server configuration
  heal         heal disks, buckets and objects on MinIO server
  profile      generate profile data for debugging purposes
  top          provide top like statistics for MinIO
  trace        show http trace for MinIO server
  console      show console logs for MinIO server
  prometheus   manages prometheus config
  kms          perform KMS management operations
  obd          run on-board diagnostics
  bucket       manage buckets defined in the MinIO server

FLAGS:
  --config-dir value, -C value  path to configuration folder (default: "/root/.mc")
  --quiet, -q                   disable progress bar display
  --no-color                    disable color theme
  --json                        enable JSON formatted output
  --debug                       enable debug output
  --insecure                    disable SSL certificate verification
  --help, -h                    show help

supports simple queuing service

Minio is written in Go, comes with a command line client plus a browser interface, and supports simple queuing service for Advanced Message Queuing Protocol (AMQP), Elasticsearch, Redis, NATS, and PostgreSQL targets

minio Configuration



sudo nano /etc/default/minio

MINIO_ACCESS_KEY="minio"
MINIO_VOLUMES="/usr/local/share/minio/"
MINIO_OPTS="-C /etc/minio --address your_server_ip:9000"
MINIO_SECRET_KEY="miniostorage"


MINIO_ACCESS_KEY: This sets the access key you will use to access the Minio browser user interface.
MINIO_SECRET_KEY: This sets the private key you will use to complete your login credentials into the Minio interface. This tutorial has set the value to miniostorage, but we advise choosing a different, more complicated password to secure your server.
MINIO_VOLUMES: This identifies the storage directory that you created for your buckets.
MINIO_OPTS: This changes where and how the server serves data. The -C flag points Minio to the configuration directory it should use, while the --address flag tells Minio the IP address and port to bind to. If the IP address is not specified, Minio will bind to every address configured on the server, including localhost and any Docker-related IP addresses, so directly specifying the IP address here is recommended. The default port 9000 can be changed if you would like.
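The file itself is plain KEY="value" pairs that systemd later loads via EnvironmentFile. As a rough illustration of that format (systemd's real parser handles more cases, such as line continuations):

```python
# Rough sketch of parsing /etc/default/minio-style KEY="value" lines.
# systemd's EnvironmentFile parser is more featureful; this only shows
# the shape of the file described above.
def parse_env_file(text: str) -> dict:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        env[key] = value.strip('"')
    return env

sample = '''
MINIO_ACCESS_KEY="minio"
MINIO_VOLUMES="/usr/local/share/minio/"
MINIO_OPTS="-C /etc/minio --address your_server_ip:9000"
MINIO_SECRET_KEY="miniostorage"
'''
print(parse_env_file(sample))
```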

Installing the Minio Systemd Startup Script

curl -O https://raw.githubusercontent.com/minio/minio-service/master/linux-systemd/minio.service

/etc/systemd/system/minio.service


[Unit]
Description=MinIO
Documentation=https://docs.min.io
Wants=network-online.target
After=network-online.target
AssertFileIsExecutable=/usr/local/bin/minio

[Service]
WorkingDirectory=/usr/local/

User=minio-user
Group=minio-user

EnvironmentFile=/etc/default/minio
ExecStartPre=/bin/bash -c "if [ -z \"${MINIO_VOLUMES}\" ]; then echo \"Variable MINIO_VOLUMES not set in /etc/default/minio\"; exit 1; fi"

ExecStart=/usr/local/bin/minio server $MINIO_OPTS $MINIO_VOLUMES

# Let systemd restart this service always
Restart=always

# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65536

# Disable timeout logic and wait until process is stopped
TimeoutStopSec=infinity
SendSIGKILL=no

[Install]
WantedBy=multi-user.target

# Built for ${project.name}-${project.version} (${project.name})

sudo mv minio.service /etc/systemd/system

sudo systemctl daemon-reload

sudo systemctl enable minio

sudo systemctl start minio

sudo systemctl status minio

sudo ufw allow 9000

sudo ufw enable

Minio Server With a TLS Certificate

sudo ufw allow 80

sudo ufw allow 443

sudo ufw status verbose

Install Certbot. Since Certbot maintains a separate PPA repository, you will first have to add it to your list of repositories before installing Certbot, as shown:

To prepare to add the PPA repository, first install software-properties-common, a package for managing PPAs:


sudo apt install software-properties-common

This package provides some useful scripts for adding and removing PPAs instead of doing it manually.

Now add the Universe repository:

sudo add-apt-repository universe

This repository contains free and open source software maintained by the Ubuntu community, but is not officially maintained by Canonical, the developers of Ubuntu. This is where we will find the repository for Certbot.

Next, add the Certbot repository:

sudo add-apt-repository ppa:certbot/certbot

sudo apt update

install certbot

sudo apt install certbot

Since Ubuntu 18.04 doesn’t yet support automatic installation, you will use the certonly command and --standalone to obtain the certificate:

sudo certbot certonly --standalone -d minio-server.your_domain

--standalone means that this certificate is for a built-in standalone web server. For more information on this, see our How To Use Certbot Standalone Mode to Retrieve Let’s Encrypt SSL Certificates on Ubuntu 18.04 tutorial.


sudo cp /etc/letsencrypt/live/minio-server.your_domain_name/privkey.pem /etc/minio/certs/private.key
sudo cp /etc/letsencrypt/live/minio-server.your_domain_name/fullchain.pem /etc/minio/certs/public.crt


sudo chown minio-user:minio-user /etc/minio/certs/private.key

https://www.digitalocean.com/community/tutorials/how-to-set-up-an-object-storage-server-using-minio-on-ubuntu-18-04


mc mb minio/bucketAuto
mc: <ERROR> Unable to make bucket `minio/bucketAuto`. Bucket name contains invalid characters
[root@localhost ~]# mc mb minio/bucketauto
Bucket created successfully `minio/bucketauto`.

mc admin user add TARGET ACCESSKEY SECRETKEY

ACCESSKEY:
  Also called the username.

SECRETKEY:
  Also called the password.

1. Add a new user 'foobar' to MinIO server.
$ set +o history
$ mc admin user add myminio foobar foo12345
$ set -o history
2. Add a new user 'foobar' to MinIO server, prompting for keys.
$ mc admin user add myminio
Enter Access Key: foobar
Enter Secret Key: foobar12345
3. Add a new user 'foobar' to MinIO server using piped keys.
$ set +o history
$ echo -e "foobar\nfoobar12345" | mc admin user add myminio
$ set -o history

Step 3 - Create the policy to grant access to the bucket local/wifey
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:ListBucket",
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::wifey/*", "arn:aws:s3:::wifey"
      ],
      "Sid": "BucketAccessForUser"
    }
  ]
}

Add policy to your minio instance
mc admin policy add local wifey-bucket-policy /tmp/sample-bucket-policy.txt

Associate policy with your user
mc admin policy set local wifey-bucket-policy user=wifey-user

Now the credentials that you share with a user will only allow them to access this one bucket.
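Instead of hand-editing the JSON, the same single-bucket policy document can be generated programmatically; a sketch (bucket name wifey as in the example above):

```python
# Generate the Step 3 single-bucket access policy for a given bucket name.
import json

def bucket_policy(bucket: str) -> str:
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "BucketAccessForUser",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket", "s3:PutObject",
                "s3:GetObject", "s3:DeleteObject",
            ],
            # both the bucket ARN and its object ARN are required
            "Resource": [f"arn:aws:s3:::{bucket}/*", f"arn:aws:s3:::{bucket}"],
        }],
    }, indent=2)

print(bucket_policy("wifey"))
```

Write the output to a file and feed it to mc admin policy add as shown above.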

mc admin user info local wifey-user
AccessKey: wifey-user
Status: enabled
PolicyName: wifey-bucket-policy
MemberOf:

http://www.gnu.org/software/gcc/gcc-10/
gcc 10.2 http://ftp.tsukuba.wide.ad.jp/software/gcc/releases/gcc-10.2.0/
https://bigsearcher.com/mirrors/gcc/releases/gcc-10.2.0/gcc-10.2.0.tar.gz
obtain GCC please use our mirror sites or our version control system.

Compiling GCC 10.2 on CentOS 7


[root@localhost gcc-10.2.0]# ./contrib/download_prerequisites
2020-08-10 20:07:51 URL:http://gcc.gnu.org/pub/gcc/infrastructure/gmp-6.1.0.tar.bz2 [2383840/2383840] -> "./gmp-6.1.0.tar.bz2" [1]
2020-08-10 20:09:55 URL:http://gcc.gnu.org/pub/gcc/infrastructure/mpfr-3.1.4.tar.bz2 [1279284/1279284] -> "./mpfr-3.1.4.tar.bz2" [1]
2020-08-10 20:10:53 URL:http://gcc.gnu.org/pub/gcc/infrastructure/mpc-1.0.3.tar.gz [669925/669925] -> "./mpc-1.0.3.tar.gz" [1]
2020-08-10 20:13:20 URL:http://gcc.gnu.org/pub/gcc/infrastructure/isl-0.18.tar.bz2 [1658291/1658291] -> "./isl-0.18.tar.bz2" [1]
gmp-6.1.0.tar.bz2: OK
mpfr-3.1.4.tar.bz2: OK
mpc-1.0.3.tar.gz: OK
isl-0.18.tar.bz2: OK
All prerequisites downloaded successfully.

# ll
.... # you can see that gmp, mpfr, mpc and isl were downloaded automatically
lrwxrwxrwx. 1 root root 12 Aug 10 20:13 gmp -> ./gmp-6.1.0/
lrwxrwxrwx. 1 root root 13 Aug 10 20:13 mpfr -> ./mpfr-3.1.4/
lrwxrwxrwx. 1 root root 12 Aug 10 20:13 mpc -> ./mpc-1.0.3/
lrwxrwxrwx. 1 root root 11 Aug 10 20:13 isl -> ./isl-0.18/

mkdir gccBuild
cd gccBuild
../gcc-10.2.0/configure --prefix=/opt/gcc-10.2.0 --enable-languages=c,c++,go --disable-multilib
# --disable-multilib: do not build the 32-bit gcc libraries
# make
# make install

../gcc-10.2.0/configure --prefix=/share/lord/toolchain/gcc-10.2.0 --enable-languages=c,c++,go


➜ /bin alias sha512sum='/share/CACHEDEV1_DATA/.qpkg/container-station/bin/busybox sha512sum'
➜ /bin sha512sum --help
BusyBox v1.22.1 (2019-07-11 04:19:06 UTC) multi-call binary.

Usage: sha512sum [-c[sw]] [FILE]...

Print or check SHA512 checksums

-c Check sums against list in FILEs
-s Don't output anything, status code shows success
-w Warn about improperly formatted checksum lines

➜ /bin

configure: error: Building GCC requires GMP 4.2+, MPFR 3.1.0+ and MPC 0.8.0+.
Try the --with-gmp, --with-mpfr and/or --with-mpc options to specify
their locations.


./contrib/download_prerequisites
./contrib/download_prerequisites: line 234: sha512sum: command not found
error: Cannot verify integrity of possibly corrupted file gmp-6.1.0.tar.bz2
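download_prerequisites only needs sha512sum to compare each tarball against a known checksum; when the applet is missing you can verify a download by hand. A stdlib sketch:

```python
# Compute a file's SHA-512 the way sha512sum would, reading in chunks so
# large tarballs such as gmp-6.1.0.tar.bz2 are not loaded into memory at once.
import hashlib

def sha512_of(path: str) -> str:
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()
```

Compare the printed digest against the expected checksum shipped with the GCC sources.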

Image from usb3.com

A USB 3.0 port is actually seen as two ports by macOS: one USB 2.0 and one USB 3.0.
Apple chose 15 ports as the limit. USB hubs attached to one of your USB controller's ports have a different kind of port limit: in total, a single USB port can be split into 127 ports. This includes USB hubs attached to USB hubs.

[TOC]

Running Custom Containers Under Chrome OS
Google working on new way to run Android apps in Chrome OS called ‘ARCVM’
https://en.wikipedia.org/wiki/Chrome_OS#cite_note-GTOS-8
https://chromium.googlesource.com/chromiumos/docs/+/master/containers_and_vms.md#don_t-android-apps-arc_run-in-a-container-and-not-a-vm
Install Android apps on your Chromebook
https://developer.android.com/topic/arc/device-support
https://lwn.net/Articles/701964/
/usr/bin/vm_concierge
official Doc https://github.com/sebanc/brunch
sebanc/brunch: Boot ChromeOS on x86_64 PC … - GitHub
https://github.com/sebanc/brunch/releases
https://cros-updates-serving.appspot.com/
https://www.eevblog.com/forum/programming/install-official-google-chrome-os-on-pc-laptop-with-play-store-and-linux!/
https://zhuanlan.zhihu.com/p/161247724
https://www.xda-developers.com/fydeos-chrome-os-brings-android-apps-pc/
https://fydeos.com/instructions-pc
https://faq.fydeos.com/en/getting-started/install-fydeos-to-hdd/
How to Install Chromium OS on Raspberry Pi
chromium https://www.makeuseof.com/tag/download-google-chrome-os-and-run-on-a-real-computer/
Installation: the complete Chrome OS installation guide
Dual Boot Chrome OS and Windows 10
chrome OS
blog ChromeOS

If you are in developer mode, open a crosh (control-alt-T) and issue
> the command "shell". Then you can become root (sudo bash) and set
> passwords for the root and chronos users as you please.
>
It's probably worth adding: Generally, being in dev mode isn't
enough to be able to log in to a Chrome OS device via ssh.
The upstart job that starts sshd isn't present in a base Chrome OS
image. If you want sshd to be running on your device, then in addition
to switching to dev mode you'll need to do one of the following:
1) Build and install your own Chromium OS image.
2) Disable rootfs verification on your device, and then copy the
openssh-server job into /etc/init.
3) Write a short script that does what the openssh-server job does,
install it in /usr/local/bin, and then run the script manually after you
boot.

mount / --remount -o rw
mount -o remount,rw /

The Chromebook can support your use case natively, but it is a bit limited. There are some apps you can install that give fuller features.

Inside the browser you can install an SSH client, or use the limited built-in shell by pressing CTRL+ALT+T to open crosh (the Chrome OS developer shell). The terminal can only ssh into other machines and do a few other things; a lot of what you would expect from standard bash is definitely missing.

That said, the Chromebook is great for web browsing and just ssh-ing into other computers.

The default behaviour for most Linux file systems is to safeguard your data. When the kernel detects an error in the storage subsystem it will make the filesystem read-only to prevent (further) data corruption.

You can tune this somewhat with the mount option errors={continue|remount-ro|panic} which are documented in the system manual (man mount).

When your root file system encounters such an error, most of the time the error won't be recorded in your log files, as they will now be read-only too. Fortunately, since it is a kernel action, the original error message is recorded in memory first, in the kernel ring buffer. Unless already flushed from memory, you can display the contents of the ring buffer with the dmesg command.

Most real hard disks support SMART and you can use smartctl to try and diagnose the disk health.

Depending on the error messages, you could decide it is still safe to use the file system and return it to read-write condition with mount -o remount,rw /

In general though, disk errors are a precursor to complete disk failure. Now is the time to create a back-up of your data or to confirm the status of your existing back-ups.

inside chromeOS


localhost /etc/init # uname -a
Linux localhost 4.19.122-brunch-sebanc #1 SMP PREEMPT Tue Jul 7 20:49:02 CEST 2020 x86_64 Intel(R) Core(TM) i5-9400 CPU @ 2.90GHz GenuineIntel GNU/Linux
localhost /etc/init # cat /etc/issue
Developer Console

To return to the browser, press:

[ Ctrl ] and [ Alt ] and [ <- ] (F1)

To use this console, the developer mode switch must be engaged.
Doing so will destroy any saved data on the system.

In developer mode, it is possible to
- login and sudo as user 'chronos'
- require a password for sudo and login(*)
- disable power management behavior (screen dimming):
sudo initctl stop powerd
- install your own operating system image!

* To set a password for 'chronos', run the following as root:

chromeos-setdevpasswd

If you are having trouble booting a self-signed kernel, you may need to
enable USB booting. To do so, run the following as root:

enable_dev_usb_boot

Have fun and send patches!

# cat /etc/os-release
NAME=Chrome OS
ID_LIKE=chromiumos
ID=chromeos
GOOGLE_CRASH_ID=ChromeOS
HOME_URL=https://www.chromium.org/chromium-os
BUG_REPORT_URL=https://crbug.com/new
VERSION=83
VERSION_ID=83
BUILD_ID=13020.87.0

localhost /etc/init # cat /etc/lsb-release
CHROMEOS_ARC_ANDROID_SDK_VERSION=28
CHROMEOS_ARC_VERSION=6612792
CHROMEOS_AUSERVER=https://block-tools.google.com/service/update2
CHROMEOS_BOARD_APPID={625849FA-56A0-4E67-9163-B89BE0C2A6AE}
CHROMEOS_CANARY_APPID={90F229CE-83E2-4FAF-8479-E368A34938B1}
CHROMEOS_DEVSERVER=
CHROMEOS_RELEASE_APPID={625849FA-56A0-4E67-9163-B89BE0C2A6AE}
CHROMEOS_RELEASE_BOARD=rammus-signed-mp-v2keys
CHROMEOS_RELEASE_BRANCH_NUMBER=87
CHROMEOS_RELEASE_BUILDER_PATH=rammus-release/R83-13020.87.0
CHROMEOS_RELEASE_BUILD_NUMBER=13020
CHROMEOS_RELEASE_BUILD_TYPE=Official Build
CHROMEOS_RELEASE_CHROME_MILESTONE=83
CHROMEOS_RELEASE_DESCRIPTION=13020.87.0 (Official Build) stable-channel rammus
CHROMEOS_RELEASE_KEYSET=mp-v2
CHROMEOS_RELEASE_NAME=Chrome OS
CHROMEOS_RELEASE_PATCH_NUMBER=0
CHROMEOS_RELEASE_TRACK=stable-channel
CHROMEOS_RELEASE_UNIBUILD=1
CHROMEOS_RELEASE_VERSION=13020.87.0
DEVICETYPE=CHROMEBOOK
GOOGLE_RELEASE=13020.87.0



localhost /etc/init # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether b4:2e:99:d6:9d:da brd ff:ff:ff:ff:ff:ff
inet 192.168.123.154/24 brd 192.168.123.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::b62e:99ff:fed6:9dda/64 scope link
valid_lft forever preferred_lft forever
3: arcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 16:66:e7:ad:9a:fc brd ff:ff:ff:ff:ff:ff
inet 100.115.92.1/30 brd 100.115.92.3 scope global arcbr0
valid_lft forever preferred_lft forever
inet6 fe80::609f:cdff:fec3:6a4f/64 scope link
valid_lft forever preferred_lft forever
4: veth_arc0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master arcbr0 state UP group default qlen 1000
link/ether 16:66:e7:ad:9a:fc brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::1466:e7ff:fead:9afc/64 scope link
valid_lft forever preferred_lft forever
5: arc_eth0: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 02:97:a5:85:46:25 brd ff:ff:ff:ff:ff:ff
inet 100.115.92.9/30 brd 100.115.92.11 scope global arc_eth0
valid_lft forever preferred_lft forever
inet6 fe80::6056:84ff:fe63:9da7/64 scope link
valid_lft forever preferred_lft forever
6: veth_eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master arc_eth0 state UP group default qlen 1000
link/ether 02:97:a5:85:46:25 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::97:a5ff:fe85:4625/64 scope link
valid_lft forever preferred_lft forever
9: vmtap0: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether ce:7a:5b:63:f6:9e brd ff:ff:ff:ff:ff:ff
inet 100.115.92.25/30 brd 100.115.92.27 scope global vmtap0
valid_lft forever preferred_lft forever
inet6 fe80::cc7a:5bff:fe63:f69e/64 scope link
valid_lft forever preferred_lft forever

Minijail

Minijail is a sandboxing and containment tool used in Chrome OS and Android. It provides an executable that can be used to launch and sandbox other programs, and a library that can be used by code to sandbox itself.

paravirtualized crosvm, written in Rust

https://opensource.google/projects/crosvm

https://chromium.googlesource.com/chromiumos/platform/crosvm/

crosvm is a virtual machine monitor (VMM) based on Linux’s KVM hypervisor, with a focus on simplicity, security, and speed. crosvm is intended to run Linux guests, originally as a security boundary for running native applications on the Chrome OS platform. Compared to QEMU, crosvm doesn’t emulate architectures or real hardware, instead concentrating on paravirtualized devices, such as the virtio standard.

Repo     License
crosvm   BSD 3-clause


crosvm 30088 30073 0 Jul28 ? 00:00:01 /usr/bin/crosvm run --cpus 6 --mem 30968 --tap-fd 14 --cid 5 --socket /run/vm/vm.TLYSX3/crosvm.sock --wayland-sock /run/chrome/wayland-0 --serial type=syslog,num=1 --syslog-tag VM(5) --params snd_intel8x0.inside_vm=1 snd_intel8x0.ac97_clock=48000 --pmem-device /run/imageloader/cros-termina/13018.0.0/vm_rootfs.img --params root=/dev/pmem0 ro rootflags=dax --wayland-dmabuf --gpu --ac97 backend=cras --disk /run/imageloader/cros-termina/13018.0.0/vm_tools.img,sparse=false --rwdisk /home/root/39e8b44b05905c9d6e9cb6d264848b0c7aeac593/crosvm/dGVybWluYQ==.img,sparse=true /run/imageloader/cros-termina/13018.0.0/vm_kernel

sshd on chromeOS

SSH Daemon (Samsung Chromebook Exynos 5250)



$ sudo passwd root

$ su -

#mkdir -p /mnt/stateful_partition/etc/ssh

#A couple SSH keys need to be generated for sshd to use.

#ssh-keygen -t ed25519 -f /mnt/stateful_partition/etc/ssh/ssh_host_ed25519_key
#ssh-keygen -t rsa -f /mnt/stateful_partition/etc/ssh/ssh_host_rsa_key


total 16
-rw-------. 1 root root 1385 Jul 29 02:51 ssh_host_ed25519_key
-rw-r--r--. 1 root root 604 Jul 29 02:51 ssh_host_ed25519_key.pub
-rw-------. 1 root root 2602 Jul 29 02:50 ssh_host_rsa_key
-rw-r--r--. 1 root root 568 Jul 29 02:50 ssh_host_rsa_key.pub

#/sbin/iptables -A INPUT -p tcp --dport 22 -j ACCEPT

At this point you should be able to log in from a remote machine via ssh. The last step is to have sshd start automatically on system startup. This can be accomplished by adding a script under the /etc/init directory. I called mine sshd.conf; it contains the following lines.
cat > sshd.conf <<-'EOF'
start on started system-services
script
/sbin/iptables -A INPUT -p tcp --dport 22 -j ACCEPT
/usr/sbin/sshd
end script
EOF

Base hardware compatibility:

ChromeOS recovery images

2 types of ChromeOS recovery images exist and use different device configuration mechanisms:

  • non-unibuild images: configured for single device configurations like eve (Google Pixelbook) and nocturne (Google Pixel Slate) for example.
  • unibuild images: intended to manage multiple devices through the use of the CrosConfig tool.

Unlike the Croissant framework, which mostly supports non-unibuild images (configuration and access to Android apps), Brunch should work with both but will provide better hardware support for unibuild images.

Currently:

  • “rammus” is the recommended image for devices with 4th generation Intel CPU and newer.
  • “samus” is the recommended image for devices with 3rd generation Intel CPU and older.
  • “grunt” is the image to use if you have supported AMD hardware.

ChromeOS recovery images can be downloaded from here: https://cros-updates-serving.appspot.com/

https://dl.google.com/dl/edgedl/chromeos/recovery/chromeos_13020.87.0_rammus_recovery_stable-channel_mp-v2.bin.zip

Download Links

1) Linux Mint https://www.linuxmint.com/download.php

2) Rufus https://rufus.ie/

3) Brunch https://github.com/sebanc/brunch/releases



sudo apt-get install pv
sudo apt-get install cgpt


# ==pending== https://github.com/shrikant2002/ChromeOS/blob/master/install.sh
#!/bin/sh
# SUBSCRIBE to Kedar Nimbalkar on youtube for more such videos https://www.youtube.com/user/kedar123456889
sudo apt-get update
sudo apt-get install figlet
sudo apt-get install pv
sudo apt-get install cgpt
sudo figlet -c "SUBSCRIBE TO"
sudo figlet -c Kedar
sudo figlet -c Nimbalkar
sudo echo https://www.youtube.com/user/kedar123456889
sudo bash chromeos-install.sh -src rammus_recovery.bin -dst /dev/sda

Extend Reading:

The Chromebook Linux Shell
https://developer.android.com/studio/install#chrome-os
What’s new in Android apps for Chrome OS (Google I/O ‘18
https://developer.android.com/topic/arc
Chromebooks run the entire Android OS in a container, similar to Docker or LXC
In 2015, Google created a way for the Chrome browser to run Android apps, called Android Runtime for Chrome (ARC)