User Comprehensive Analysis System
Project Background
In recent years, with the rapid rise of internet finance, the government has issued policies requiring stronger risk control for online transactions and encouraging the use of big-data analysis and user-behavior modeling to build and refine transaction risk detection models. Current big-data risk control, however, still suffers from poor timeliness and limited accuracy. The comprehensive user analysis platform contains the following modules: comprehensive data analysis | login risk | registration risk | transaction risk | activity risk analysis. The relationships among the subsystems are as follows.
- Business system: typically the app plus its backend (or a web frontend) serving end users. It hosts the business modules and is the data source for big-data computation.
- Risk control system (stream computing): supports the business system (its clients are backend services). It evaluates risk from data collected by the business system, or from tracking points embedded in pages that capture the user's environment and actions.
- Punishment system: based on the report produced by the risk evaluation system, applies controls or penalties to the actor behind an event, e.g., adding a CAPTCHA, or blocking login, registration, or order placement.
- Comprehensive analysis: assists the risk control system by judging, from its observed behavior, whether risk strategies or parameter settings are flawed (e.g., an evaluation factor that fires far too rarely or far too often). It helps operations staff and analysts discover new strategies and improve marketing tactics.
Platform Architecture
System Model
Real-Time Business
Registration Risk
Detect abnormal registration behavior and produce a registration risk evaluation report.
- More than N registrations from the same device on the same network within a short time window
- Multiple registrations each completed in under m minutes are flagged as abnormal
- The same device or IP calling the SMS or email service more than N times within the allowed window is flagged as a registration risk
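The first rule above can be sketched as a sliding-window counter keyed by device or IP. This is an illustrative stdlib-only sketch, not the platform's stream-computing implementation; the class name `RegistrationWindow` and the threshold parameters are assumptions.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sliding-window counter: flags a device/network key once it produces
// more than `maxCount` registrations within `windowMillis`.
public class RegistrationWindow {
    private final long windowMillis;
    private final int maxCount;
    private final Deque<Long> timestamps = new ArrayDeque<>();

    public RegistrationWindow(long windowMillis, int maxCount) {
        this.windowMillis = windowMillis;
        this.maxCount = maxCount;
    }

    // Record one registration at `now` (epoch millis) and report whether
    // the window now holds more than the allowed number of events.
    public boolean recordAndCheck(long now) {
        timestamps.addLast(now);
        // evict events that have fallen out of the window
        while (!timestamps.isEmpty() && now - timestamps.peekFirst() > windowMillis) {
            timestamps.removeFirst();
        }
        return timestamps.size() > maxCount;
    }
}
```

In the real system the same logic would run per key (device ID, IP) inside the stream-computing job rather than in a single in-memory deque.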
Login Risk
Detect logins by someone other than the account owner, and mine for potential account-theft risks.
- Login from an unusual location (not one of the user's frequent login locations)
- Displacement speed between logins exceeding a plausible value
- Login from a new device
- Login outside the user's habitual time window
- Cumulative daily logins exceeding the upper limit
- Wrong password, or password input differing significantly from the usual pattern
- A change in the user's input profile (time in ms spent on each input control)
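The displacement-speed rule can be sketched with the haversine great-circle distance between two consecutive login coordinates. A stdlib-only sketch; the 1000 km/h threshold in the usage below is an assumed parameter, not a value from the platform.

```java
// Flags a pair of logins whose implied travel speed exceeds a plausible maximum.
public class TravelSpeedRule {
    static final double EARTH_RADIUS_KM = 6371.0;

    // Great-circle distance between two (lat, lon) points, in kilometers.
    static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
    }

    // True when covering the distance between the two logins would require
    // traveling faster than maxKmPerHour.
    static boolean isRisky(double lat1, double lon1, long t1Millis,
                           double lat2, double lon2, long t2Millis,
                           double maxKmPerHour) {
        double km = haversineKm(lat1, lon1, lat2, lon2);
        double hours = Math.abs(t2Millis - t1Millis) / 3_600_000.0;
        if (hours == 0) return km > 0; // two places at the same instant
        return km / hours > maxKmPerHour;
    }
}
```

For example, a Beijing login followed ten minutes later by a Shanghai login (roughly 1000 km apart) implies an impossible travel speed and would be flagged.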
Transaction Risk
Assists in checking the user's payment environment: region, network, and frequency.
- Transactions outside the user's habitual time window are flagged as abnormal
- Check the network environment: not using the user's usual Wi-Fi is a risk signal
- Multiple transactions within a short period
- Unusually large payment amount
- Payment from an unfamiliar region
- Change of transaction device
Activity Risk
Activity (campaign) risk covers the activity's duration and its reward redemption rate.
- The activity ending ahead of schedule is flagged as a risk.
- An abnormally high redemption rate within a short time.
- Multiple redemptions from a single account.
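The redemption-rate rule can be sketched as a ratio check over a time bucket. Stdlib-only; the class name and the threshold value are assumptions for illustration.

```java
// Flags an activity whose redemptions consume the reward pool too fast:
// if the fraction redeemed within one time bucket exceeds `maxRate`, raise a risk.
public class RedemptionRateRule {
    // redeemedInBucket: rewards redeemed during the current time bucket
    // totalRewards:     total rewards allocated to the activity
    static boolean isRisky(long redeemedInBucket, long totalRewards, double maxRate) {
        if (totalRewards <= 0) return false; // nothing to redeem, nothing to flag
        return (double) redeemedInBucket / totalRewards > maxRate;
    }
}
```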
Offline Business
Helps stream computing and the operations department run the business better:
- User composition of the platform (gender, device, age group, region)
- App usage habits (time, category/channel)
- Click rankings by channel or category, page dwell time, bounce rate
- Per-app performance of each evaluation factor in the risk control system
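Among these offline metrics, bounce rate is commonly defined as the share of sessions that viewed exactly one page. A stdlib sketch under that assumed definition:

```java
import java.util.List;

// Bounce rate: fraction of sessions with exactly one page view.
public class BounceRate {
    // pageViewsPerSession: number of page views recorded for each session
    static double compute(List<Integer> pageViewsPerSession) {
        if (pageViewsPerSession.isEmpty()) return 0.0;
        long bounced = pageViewsPerSession.stream().filter(v -> v == 1).count();
        return (double) bounced / pageViewsPerSession.size();
    }
}
```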
Java Business Module (Phase 1)
Work Plan

Task | Time Budget | Notes
---|---|---
Build the Web-Model backend with SpringBoot and publish RESTful interfaces | 1 day, coding | |
Unit testing with RestTemplate, WebClient/Postman | 1 day, lecture | |
Connect the RESTful interfaces to an EasyUI frontend | 3 days, 70% lecture / 30% practice | |
jQuery plugin customization and tracking-point design | 0.5 day, 100% lecture | |
Custom jQuery plugins and tracking-point implementation | 1.5 days, 70% lecture / 30% practice | |
Basic report visualization with ECharts/HighCharts (static) | 1.5 days, 30% lecture / 70% practice | |
Develop and deploy microservices with SpringCloud | 3 days, 70% lecture / 30% practice | |

Notes: this is a side track and does not cover complex Java business scenarios. It serves as a JavaWeb foundation for presenting the big-data visualization platform and for understanding microservice development.
Build the Web-User-Model backend with SpringBoot and publish RESTful interfaces
<groupId>com.baizhi</groupId>
<artifactId>UserModel</artifactId>
<version>1.0-SNAPSHOT</version>
1. Required SpringBoot version
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.1.5.RELEASE</version>
</parent>
2. Database script
DROP TABLE IF EXISTS t_user;
set character_set_results=utf8;
set character_set_client=utf8;
CREATE TABLE t_user (
id int primary key AUTO_INCREMENT,
name varchar(32) unique ,
password varchar(128) ,
sex tinyint(1) ,
photo varchar(255) ,
birthDay date,
email varchar(128)
) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8;
3. Entity class
public class User implements Serializable {
private Integer id;
private String name;
private boolean sex;
private String password;
private Date birthDay;
private String photo;
private String email;
...
}
4. DAO interface
public interface IUserDAO {
void saveUser(User user);
User queryUserByNameAndPassword(User user);
User queryUserById(Integer id);
void deleteByUserId(Integer id);
List<User> queryUserByPage(
@Param(value = "pageNow") Integer pageNow,
@Param(value = "pageSize") Integer pageSize,
@Param(value = "column") String column,
@Param(value = "value") Object value);
int queryCount(
@Param(value = "column") String column,
@Param(value = "value") Object value);
void updateUser(User user);
}
5. Service interface
public interface IUserService {
    /**
     * Save a user
     * @param user
     */
    void saveUser(User user);
    /**
     * Query a user by name and password
     * @param user
     * @return
     */
    User queryUserByNameAndPassword(User user);
    /**
     * Paged query
     * @param pageNow
     * @param pageSize
     * @param column column for the fuzzy match
     * @param value  fuzzy match value
     * @return
     */
    List<User> queryUserByPage(Integer pageNow, Integer pageSize,
                               String column, Object value);
    /**
     * Count total user records
     * @param column
     * @param value
     * @return
     */
    int queryUserCount(String column, Object value);
    /**
     * Query a user by ID
     * @param id
     * @return
     */
    User queryUserById(Integer id);
    /**
     * Delete users by IDs
     * @param ids
     */
    void deleteByUserIds(Integer[] ids);
    /**
     * Update a user
     * @param user
     */
    void updateUser(User user);
}
6. Controller endpoints
@RestController
@RequestMapping(value = "/formUserManager")
public class FormUserController {
@PostMapping(value = "/registerUser")
public User registerUser(User user,
@RequestParam(value = "multipartFile",required = false) MultipartFile multipartFile) throws IOException
@PostMapping(value = "/userLogin")
public User userLogin(User user)
@PostMapping(value = "/addUser")
public User addUser(User user,
@RequestParam(value = "multipartFile",required = false) MultipartFile multipartFile) throws IOException
@PutMapping(value = "/updateUser")
public void updateUser(User user,
@RequestParam(value = "multipartFile",required = false) MultipartFile multipartFile) throws IOException
@DeleteMapping(value = "/deleteUserByIds")
public void deleteUserByIds(@RequestParam(value = "ids") Integer[] ids)
@GetMapping(value = "/queryUserByPage")
public List<User> queryUserByPage(@RequestParam(value = "page",defaultValue = "1") Integer pageNow,
@RequestParam(value = "rows",defaultValue = "10") Integer pageSize,
@RequestParam(value = "column",required = false) String column,
@RequestParam(value = "value",required = false) String value)
@GetMapping(value = "/queryUserCount")
public Integer queryUserCount( @RequestParam(value = "column",required = false) String column,
@RequestParam(value = "value",required = false) String value)
@GetMapping(value = "/queryUserById")
public User queryUserById(@RequestParam(value = "id") Integer id)
}
Building the Project
Basic project structure
Configuration file list
- Maven pom dependencies
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.baizhi</groupId>
<artifactId>UserModel</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>war</packaging>
<name>UserModel Maven Webapp</name>
<!-- FIXME change it to the project's website -->
<url>http://www.example.com</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<maven.compiler.source>1.7</maven.compiler.source>
<maven.compiler.target>1.7</maven.compiler.target>
</properties>
<!-- dependency-management parent -->
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.1.5.RELEASE</version>
</parent>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.12</version>
<scope>test</scope>
</dependency>
<!-- web starter -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<!-- mysql -->
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>8.0.12</version>
</dependency>
<!-- Alibaba Druid connection pool -->
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>druid</artifactId>
<version>1.0.31</version>
</dependency>
<!-- mybatis -->
<dependency>
<groupId>org.mybatis.spring.boot</groupId>
<artifactId>mybatis-spring-boot-starter</artifactId>
<version>1.3.3</version>
</dependency>
<!-- let the embedded Tomcat parse JSP -->
<dependency>
<groupId>org.apache.tomcat.embed</groupId>
<artifactId>tomcat-embed-jasper</artifactId>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-tomcat</artifactId>
<scope>provided</scope>
</dependency>
<!-- jstl -->
<dependency>
<groupId>jstl</groupId>
<artifactId>jstl</artifactId>
<version>1.2</version>
</dependency>
<!-- test -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<!-- aop -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-aop</artifactId>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-tx</artifactId>
<version>5.1.5.RELEASE</version>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>org.mybatis</groupId>
<artifactId>mybatis-spring</artifactId>
<version>1.3.2</version>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>org.mybatis</groupId>
<artifactId>mybatis</artifactId>
<version>3.4.6</version>
<scope>compile</scope>
</dependency>
<!-- file upload support -->
<dependency>
<groupId>commons-fileupload</groupId>
<artifactId>commons-fileupload</artifactId>
<version>1.3</version>
</dependency>
<!-- lombok -->
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<version>1.18.4</version>
</dependency>
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>fastjson</artifactId>
<version>1.2.47</version>
</dependency>
<!--Spring Redis RedisAutoConfiguration-->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
<groupId>redis.clients</groupId>
<artifactId>jedis</artifactId>
<version>3.2.0</version>
</dependency>
</dependencies>
<build>
<finalName>UserModel</finalName>
<plugins>
<!-- spring-boot packaging plugin -->
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<!-- version inherited from spring-boot-starter-parent (2.1.5.RELEASE) -->
</plugin>
</plugins>
</build>
</project>
- application.yml
# server settings
server:
  port: 8989
  servlet:
    context-path: /UserModel
spring:
  # datasource
  datasource:
    type: com.alibaba.druid.pool.DruidDataSource
    username: root
    password: root
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://CentOS:3306/usermodel?useSSL=false&characterEncoding=UTF8&serverTimezone=GMT
  # multipart upload limits
  servlet:
    multipart:
      max-request-size: 50MB
      max-file-size: 50MB
      enabled: true
  # response encoding
  http:
    encoding:
      charset: utf-8
  # redis connection
  redis:
    host: CentOS
    port: 6379
# MyBatis settings
mybatis:
  mapper-locations: classpath:mapper/*.xml
  type-aliases-package: com.baizhi.entity
  # create a config folder under src/main/resources
  config-location: classpath:config/mybatis.xml
# logging
logging:
  level:
    root: info
- UserDAO.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.baizhi.dao.IUserDAO">
<cache type="com.baizhi.cache.UserDefineRedisCache"></cache>
<sql id="all">
id, name, password, sex, photo, birthDay, email
</sql>
<resultMap type="com.baizhi.entity.User" id="TUserMap">
<result property="id" column="id" jdbcType="INTEGER"/>
<result property="name" column="name" jdbcType="VARCHAR"/>
<result property="password" column="password" jdbcType="VARCHAR"/>
<result property="sex" column="sex" jdbcType="OTHER"/>
<result property="photo" column="photo" jdbcType="VARCHAR"/>
<result property="birthDay" column="birthDay" jdbcType="OTHER"/>
<result property="email" column="email" jdbcType="VARCHAR"/>
</resultMap>
<!-- query one row by id -->
<select id="queryUserById" resultMap="TUserMap">
select
<include refid="all"></include>
from usermodel.t_user
where id = #{id}
</select>
<!-- query a page of rows -->
<select id="queryUserByPage" resultMap="TUserMap">
select <include refid="all"></include> from usermodel.t_user
<where>
<if test="column != null and column != '' and value != null and value != ''">
${column} like concat('%', #{value}, '%')
</if>
</where>
limit #{pageNow},#{pageSize}
</select>
<!-- query by name and password -->
<select id="queryUserByNameAndPassword" resultMap="TUserMap">
select
<include refid="all"></include>
from usermodel.t_user
<where>
<if test="name != null and name != ''">
and name = #{name}
</if>
<if test="password != null and password != ''">
and password = #{password}
</if>
</where>
</select>
<!-- insert a user -->
<insert id="saveUser" keyProperty="id" useGeneratedKeys="true">
insert into usermodel.t_user(name, password, sex, photo, birthDay, email)
values (#{name}, #{password}, #{sex}, #{photo}, #{birthDay}, #{email})
</insert>
<!-- update a row -->
<update id="updateUser">
update usermodel.t_user
<set>
<if test="name != null and name != ''">
name = #{name},
</if>
<if test="password != null and password != ''">
password = #{password},
</if>
<if test="sex != null">
sex = #{sex},
</if>
<if test="photo != null and photo != ''">
photo = #{photo},
</if>
<if test="birthDay != null">
birthDay = #{birthDay},
</if>
<if test="email != null and email != ''">
email = #{email},
</if>
</set>
where id = #{id}
</update>
<!-- delete by primary key -->
<delete id="deleteByUserId">
delete from usermodel.t_user where id = #{id}
</delete>
<!-- count total rows -->
<select id="queryCount" resultType="Integer">
select count(*) from usermodel.t_user
<where>
<if test="column != null and column != '' and value != null and value != ''">
${column} like concat('%', #{value}, '%')
</if>
</where>
</select>
</mapper>
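The mapper's `limit #{pageNow},#{pageSize}` expects a 0-based row offset, while the controller receives a 1-based page number; the service layer converts between the two. The conversion can be sketched stdlib-only, with an in-memory list standing in for the table (an assumption for illustration):

```java
import java.util.List;

// Converts a 1-based page number to the 0-based row offset that
// MySQL's `LIMIT offset,count` clause expects.
public class Paging {
    static int offset(int pageNow, int pageSize) {
        return (pageNow - 1) * pageSize;
    }

    // In-memory equivalent of: select ... limit offset,pageSize
    static <T> List<T> page(List<T> rows, int pageNow, int pageSize) {
        int from = Math.min(offset(pageNow, pageSize), rows.size());
        int to = Math.min(from + pageSize, rows.size());
        return rows.subList(from, to);
    }
}
```

Forgetting this conversion is a classic off-by-one: passing the raw page number as the offset silently skips the first rows.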
- logback.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="60 seconds" debug="false">
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%p %c#%M %d{yyyy-MM-dd HH:mm:ss} %m%n</pattern>
<charset>UTF-8</charset>
</encoder>
</appender>
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>logs/userLoginFile-%d{yyyyMMdd}.log</fileNamePattern>
<maxHistory>30</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%p %c#%M %d{yyyy-MM-dd HH:mm:ss} %m%n</pattern>
<charset>UTF-8</charset>
</encoder>
</appender>
<!-- console log level -->
<root level="ERROR">
<appender-ref ref="STDOUT"/>
</root>
<logger name="org.springframework.jdbc" level="DEBUG" additivity="false">
<appender-ref ref="STDOUT"/>
</logger>
<logger name="com.baizhi.dao" level="TRACE" additivity="false">
<appender-ref ref="STDOUT"/>
</logger>
<logger name="com.baizhi.cache" level="DEBUG" additivity="false">
<appender-ref ref="STDOUT"/>
</logger>
</configuration>
- t_user.sql
DROP TABLE IF EXISTS t_user;
set character_set_results=utf8;
set character_set_client=utf8;
CREATE TABLE t_user (
id int primary key AUTO_INCREMENT,
name varchar(32) unique ,
password varchar(128) ,
sex tinyint(1) ,
photo varchar(255) ,
birthDay date,
email varchar(128)
) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8;
Project Code List
- User entity class
/**
 * (TUser) entity class
 *
 * @author makejava
 * @since 2020-03-16 16:54:52
 */
@Data
@AllArgsConstructor
@NoArgsConstructor
public class User implements Serializable {
private static final long serialVersionUID = 250284593728595695L;
private Integer id;
private String name;
private boolean sex;
private String password;
@DateTimeFormat(pattern = "yyyy-MM-dd")
@JsonFormat(pattern = "yyyy-MM-dd", timezone = "GMT+8")
private Date birthDay;
private String photo;
private String email;
}
- DAO interface
/**
 * (TUser) DAO interface
 *
 * @author makejava
 * @since 2020-03-16 16:54:52
 */
public interface IUserDAO {
    /**
     * Save a user
     *
     * @param user
     */
    void saveUser(User user);
    /**
     * Query a user by name and password
     *
     * @param user
     * @return
     */
    User queryUserByNameAndPassword(User user);
    /**
     * Query a user by ID
     *
     * @param id
     * @return
     */
    User queryUserById(Integer id);
    /**
     * Delete a user
     *
     * @param id
     */
    void deleteByUserId(Integer id);
    /**
     * Paged query
     *
     * @param pageNow
     * @param pageSize
     * @param column
     * @param value
     * @return
     */
    List<User> queryUserByPage(
            @Param(value = "pageNow") Integer pageNow,
            @Param(value = "pageSize") Integer pageSize,
            @Param(value = "column") String column,
            @Param(value = "value") Object value);
    /**
     * Count total rows
     *
     * @param column
     * @param value
     * @return
     */
    int queryCount(
            @Param(value = "column") String column,
            @Param(value = "value") Object value);
    /**
     * Update a user
     *
     * @param user
     */
    void updateUser(User user);
}
- Service interface
/**
 * (TUser) service interface
 *
 * @author makejava
 * @since 2020-03-16 16:54:52
 */
public interface IUserService {
    /**
     * Save a user
     *
     * @param user
     */
    void saveUser(User user);
    /**
     * Query a user by name and password
     *
     * @param user
     * @return
     */
    User queryUserByNameAndPassword(User user);
    /**
     * Paged query
     *
     * @param pageNow
     * @param pageSize
     * @param column column for the fuzzy match
     * @param value  fuzzy match value
     * @return
     */
    List<User> queryUserByPage(Integer pageNow, Integer pageSize,
                               String column, Object value);
    /**
     * Count total user records
     *
     * @param column
     * @param value
     * @return
     */
    int queryUserCount(String column, Object value);
    /**
     * Query a user by ID
     *
     * @param id
     * @return
     */
    User queryUserById(Integer id);
    /**
     * Delete users by IDs
     *
     * @param ids
     */
    void deleteByUserIds(Integer[] ids);
    /**
     * Update a user
     *
     * @param user
     */
    void updateUser(User user);
}
- Service implementation class
/**
 * (TUser) service implementation
 *
 * @author makejava
 * @since 2020-03-16 16:54:52
 */
@Service("tUserService")
@Transactional
public class IUserServiceImpl implements IUserService {
    @Resource
    private IUserDAO iUserDAO;
    /**
     * Save a user
     */
    @Override
    public void saveUser(User user) {
        iUserDAO.saveUser(user);
    }
    /**
     * Query by name and password
     */
    @Override
    @Transactional(propagation = Propagation.SUPPORTS, readOnly = true)
    public User queryUserByNameAndPassword(User user) {
        return iUserDAO.queryUserByNameAndPassword(user);
    }
    /**
     * Paged query: converts the 1-based page number into the 0-based
     * row offset expected by the mapper's LIMIT clause
     */
    @Override
    @Transactional(propagation = Propagation.SUPPORTS, readOnly = true)
    public List<User> queryUserByPage(Integer pageNow, Integer pageSize, String column, Object value) {
        int offset = (pageNow - 1) * pageSize;
        return iUserDAO.queryUserByPage(offset, pageSize, column, value);
    }
    /**
     * Count total rows
     */
    @Override
    @Transactional(propagation = Propagation.SUPPORTS, readOnly = true)
    public int queryUserCount(String column, Object value) {
        return iUserDAO.queryCount(column, value);
    }
    /**
     * Query a user by ID
     */
    @Override
    @Transactional(propagation = Propagation.SUPPORTS, readOnly = true)
    public User queryUserById(Integer id) {
        return iUserDAO.queryUserById(id);
    }
    /**
     * Delete users by IDs
     */
    @Override
    public void deleteByUserIds(Integer[] ids) {
        for (Integer id : ids) {
            iUserDAO.deleteByUserId(id);
        }
    }
    /**
     * Update a user
     */
    @Override
    public void updateUser(User user) {
        iUserDAO.updateUser(user);
    }
}
- Application: the SpringBoot entry class
package com.baizhi;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.web.servlet.support.SpringBootServletInitializer;
import org.springframework.context.annotation.Bean;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
@SpringBootApplication
@MapperScan(basePackages = "com.baizhi.dao")
public class Application extends SpringBootServletInitializer {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder builder) {
        return builder.sources(Application.class);
    }
    // single RedisTemplate bean; values serialized as JSON for readability
    @Bean
    public RedisTemplate redisTemplate(RedisConnectionFactory redisConnectionFactory) {
        RedisTemplate redisTemplate = new RedisTemplate();
        redisTemplate.setConnectionFactory(redisConnectionFactory);
        redisTemplate.setValueSerializer(new GenericJackson2JsonRedisSerializer());
        return redisTemplate;
    }
}
JUnit Integration Tests
- Create a test configuration class
- DAO tests
/**
 * UserDao test class
 */
@RunWith(value = SpringRunner.class)
@SpringBootTest
@Slf4j
public class UserDaoTest {
@Resource
private IUserDAO iUserDAO;
@Test
public void queryById() {
System.out.println(iUserDAO.queryUserById(3));
}
@Test
public void queryAll() {
List<User> users = iUserDAO.queryUserByPage(0, 3, "name", "z");
for (User user : users) {
System.out.println(user);
}
}
@Test
public void addUser() {
User user = new User();
user.setName("张三");
user.setPassword("123123");
Date date = new Date();
user.setBirthDay(date);
iUserDAO.saveUser(user);
}
@Test
public void twoCacheTest() {
    User user1 = iUserDAO.queryUserById(1);
    log.info("user1:{}", JSON.toJSONString(user1));
    log.info("first query done");
    User user2 = iUserDAO.queryUserById(1);
    log.info("user2:{}", JSON.toJSONString(user2));
    log.info("second query done");
    User user3 = iUserDAO.queryUserById(1);
    log.info("user3:{}", JSON.toJSONString(user3));
    log.info("third query done");
    // the update flushes the second-level cache, so the next query hits the database
    user1.setName("test1");
    iUserDAO.updateUser(user1);
    User user4 = iUserDAO.queryUserById(1);
    log.info("user4:{}", JSON.toJSONString(user4));
    log.info("fourth query done");
}
}
- Service tests
package com.baizhi.service;
import com.baizhi.entity.User;
import com.baizhi.service.impl.IUserServiceImpl;
import lombok.extern.slf4j.Slf4j;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
import java.util.Date;
import java.util.List;
import static org.junit.Assert.*;
/**
 * UserService test class
 */
@RunWith(value = SpringRunner.class)
@SpringBootTest
@Slf4j
public class UserServiceTest {
@Autowired
private IUserService userService;
@Test
public void saveUserTest() {
    User user = new User(null, "赵小六", true, "123456", new Date(), "aa.png", "qq.com");
    userService.saveUser(user);
    assertNotNull("user ID should not be null", user.getId());
}
@Test
public void queryUserByNameAndPasswordTests() {
    User loginUser = new User();
    loginUser.setName("赵小六");
    loginUser.setPassword("123456");
    User queryUser = userService.queryUserByNameAndPassword(loginUser);
    assertNotNull("user ID should not be null", queryUser.getId());
}
@Test
public void queryUserByPageTests() {
Integer pageNow = 1;
Integer pageSize = 10;
String column = "name";
String value = "小";
List<User> userList = userService.queryUserByPage(pageNow, pageSize, column,
value);
assertFalse(userList.isEmpty());
}
@Test
public void queryUserCountTest() {
String column = "name";
String value = "小";
Integer count = userService.queryUserCount(column, value);
assertTrue(count != 0);
}
@Test
public void queryUserById() {
Integer id = 2;
User u = userService.queryUserById(id);
assertNotNull(u.getName());
}
@Test
public void deleteByUserIdsTests() {
userService.deleteByUserIds(new Integer[]{1, 4, 3});
}
@Test
public void updateUserTests() {
    Integer id = 2;
    User u = userService.queryUserById(id);
    assertNotNull(u.getName());
    u.setSex(false);
    userService.updateUser(u);
    User newUser = userService.queryUserById(2);
    // Lombok generates isSex() for the primitive boolean field
    assertFalse("user sex should be false", newUser.isSex());
}
}
Notes: these tests use JUnit's Assert methods (assertion testing), which require the extra static import `import static org.junit.Assert.*;`.
RestController Publishing and Testing
RestController
- Form parameter binding
/**
 * (TUser) controller layer
 *
 * @author makejava
 * @since 2020-03-16 16:54:52
 */
@RestController
@RequestMapping(value = "/formUserManager")
public class FormUserController {
    private static final Logger LOGGER = LoggerFactory.getLogger(FormUserController.class);
    @Autowired
    private IUserServiceImpl tUserService;
    /**
     * User login
     *
     * @param user
     * @return
     */
    @PostMapping(value = "/userLogin")
    public User userLogin(User user) {
        return tUserService.queryUserByNameAndPassword(user);
    }
    /**
     * Add a user
     *
     * @param user
     * @param multipartFile
     * @return
     * @throws IOException
     */
    @PostMapping(value = "/addUser")
    public User addUser(User user, @RequestParam(value = "multipartFile", required = false) MultipartFile multipartFile) throws IOException {
        if (multipartFile != null) {
            // demo only: materialize the upload as a temp file, then discard it
            String filename = multipartFile.getOriginalFilename();
            String suffix = filename.substring(filename.lastIndexOf("."));
            File tempFile = File.createTempFile(filename.substring(0, filename.lastIndexOf(".")), suffix);
            LOGGER.info("received upload: {}", tempFile.getName());
            tempFile.delete();
        }
        tUserService.saveUser(user);
        return user;
    }
    /**
     * Update a user
     *
     * @param user
     * @param multipartFile
     * @throws IOException
     */
    @PutMapping(value = "/updateUser")
    public void updateUser(User user, @RequestParam(value = "multipartFile", required = false) MultipartFile multipartFile) throws IOException {
        if (multipartFile != null) {
            // demo only: materialize the upload as a temp file, then discard it
            String filename = multipartFile.getOriginalFilename();
            String suffix = filename.substring(filename.lastIndexOf("."));
            File tempFile = File.createTempFile(filename.substring(0, filename.lastIndexOf(".")), suffix);
            LOGGER.info("received upload: {}", tempFile.getName());
            tempFile.delete();
        }
        // update the user record
        tUserService.updateUser(user);
    }
    /**
     * Delete users
     *
     * @param ids
     */
    @DeleteMapping(value = "/deleteUserByIds")
    public void deleteUserByIds(@RequestParam(value = "ids") Integer[] ids) {
        tUserService.deleteByUserIds(ids);
    }
    /**
     * Fuzzy paged query
     *
     * @param pageNow
     * @param pageSize
     * @param column
     * @param value
     * @return
     */
    @GetMapping(value = "/queryUserByPage")
    public List<User> queryUserByPage(@RequestParam(value = "page", defaultValue = "1") Integer pageNow,
                                      @RequestParam(value = "rows", defaultValue = "10") Integer pageSize,
                                      @RequestParam(value = "column", required = false) String column,
                                      @RequestParam(value = "value", required = false) String value) {
        return tUserService.queryUserByPage(pageNow, pageSize, column, value);
    }
    /**
     * Count total users
     *
     * @param column
     * @param value
     * @return
     */
    @GetMapping(value = "/queryUserCount")
    public Integer queryUserCount(@RequestParam(value = "column", required = false) String column,
                                  @RequestParam(value = "value", required = false) String value) {
        return tUserService.queryUserCount(column, value);
    }
    /**
     * Query a user by ID
     *
     * @param id
     * @return
     */
    @GetMapping(value = "/queryUserById")
    public User queryUserById(@RequestParam(value = "id") Integer id) {
        // query from the database
        return tUserService.queryUserById(id);
    }
}
- JSON parameter binding
/**
 * (TUser) controller layer - JSON submission
 *
 * @author makejava
 * @since 2020-03-16 16:54:52
 */
@RestController
@RequestMapping(value = "/restUserManager")
public class RestUserController {
    private static Logger logger = LoggerFactory.getLogger(RestUserController.class);
    @Autowired
    private IUserServiceImpl iUserService;
    /**
     * User login
     *
     * @param user
     * @return
     */
    @PostMapping(value = "userLogin")
    public User userLogin(@RequestBody User user) {
        return iUserService.queryUserByNameAndPassword(user);
    }
    /**
     * Register a user
     *
     * @param user
     * @param multipartFile
     * @return
     * @throws IOException
     */
    @PostMapping(value = "addUser")
    public User addUser(@RequestPart(value = "user") User user, @RequestParam(value = "multipartFile", required = false) MultipartFile multipartFile) throws IOException {
        // skip file handling when no file was uploaded
        if (multipartFile != null) {
            // original file name
            String filename = multipartFile.getOriginalFilename();
            // file extension
            String suffix = filename.substring(filename.lastIndexOf("."));
            // demo only: materialize the upload as a temp file, then discard it
            File tempFile = File.createTempFile(filename.substring(0, filename.lastIndexOf(".")), suffix);
            logger.info("received upload: {}", tempFile.getName());
            tempFile.delete();
        }
        iUserService.saveUser(user);
        return user;
    }
    /**
     * Update a user
     *
     * @param user
     * @param multipartFile
     * @throws IOException
     */
    @PutMapping(value = "updateUser")
    public void updateUser(@RequestPart(value = "user") User user, @RequestParam(value = "multipartFile", required = false) MultipartFile multipartFile) throws IOException {
        // skip file handling when no file was uploaded
        if (multipartFile != null) {
            String filename = multipartFile.getOriginalFilename();
            String suffix = filename.substring(filename.lastIndexOf("."));
            // demo only: materialize the upload as a temp file, then discard it
            File tempFile = File.createTempFile(filename.substring(0, filename.lastIndexOf(".")), suffix);
            logger.info("received upload: {}", tempFile.getName());
            tempFile.delete();
        }
        // update the user record
        iUserService.updateUser(user);
    }
    /**
     * Delete users
     *
     * @param ids
     */
    @DeleteMapping(value = "deleteUserByIds")
    public void deleteUserByIds(@RequestParam(value = "ids") Integer[] ids) {
        iUserService.deleteByUserIds(ids);
    }
    /**
     * Query one user by ID
     *
     * @param id
     * @return
     */
    @GetMapping(value = "queryById")
    public User queryById(@RequestParam(value = "id") Integer id) {
        return iUserService.queryUserById(id);
    }
    /**
     * Fuzzy paged query
     *
     * @param pageNow
     * @param pageSize
     * @param column
     * @param value
     * @return
     */
    @GetMapping(value = "/queryUserByPage")
    public List<User> queryUserByPage(@RequestParam(value = "page", defaultValue = "1") Integer pageNow,
                                      @RequestParam(value = "rows", defaultValue = "10") Integer pageSize,
                                      @RequestParam(value = "column", required = false) String column,
                                      @RequestParam(value = "value", required = false) String value) {
        return iUserService.queryUserByPage(pageNow, pageSize, column, value);
    }
    /**
     * Count total users
     *
     * @param column
     * @param value
     * @return
     */
    @GetMapping(value = "/queryUserCount")
    public Integer queryUserCount(@RequestParam(value = "column", required = false) String column,
                                  @RequestParam(value = "value", required = false) String value) {
        return iUserService.queryUserCount(column, value);
    }
}
RestTemplate
- FormUserControllerTests
/**
 * FormUserController test class
 */
@SpringBootTest(classes = {Application.class})
@RunWith(SpringRunner.class)
public class FormUserControllerTests {
@Resource
private RestTemplate restTemplate;
private String urlPrefix = "http://127.0.0.1:8989/UserModel/formUserManager";
/*
@PostMapping(value = "/registerUser")
public User registerUser(User user,
@RequestParam(value = "multipartFile",required = false)
MultipartFile multipartFile)
*/
@Test
public void testRegisterUser() {
    String url = urlPrefix + "/registerUser";
    // simulate form data
    MultiValueMap<String, Object> formData = new LinkedMultiValueMap<String, Object>();
    formData.add("name", "李小四");
    formData.add("password", "123456");
    formData.add("sex", "true");
    formData.add("birthDay", "2018-01-26");
    formData.add("photo", "user.png");
    formData.add("email", "[email protected]");
    // simulate a file upload
    FileSystemResource fileSystemResource = new FileSystemResource("/Users/admin/Desktop/head.png");
    formData.add("multipartFile", fileSystemResource);
    User user = restTemplate.postForObject(url, formData, User.class);
    assertNotNull("user ID", user.getId());
}
/*
@PostMapping(value = "/userLogin")
public User userLogin(User user)
*/
@Test
public void testUserLogin() {
    String url = urlPrefix + "/userLogin";
    // simulate form data
    MultiValueMap<String, Object> formData = new LinkedMultiValueMap<String, Object>();
    formData.add("name", "李小四");
    formData.add("password", "123456");
    User user = restTemplate.postForObject(url, formData, User.class);
    assertNotNull("user ID", user.getId());
}
/*
@PostMapping(value = "/addUser")
public User addUser(User user,
@RequestParam(value = "multipartFile",required = false)
MultipartFile multipartFile) throws IOException
*/
@Test
public void testAddUser() {
    String url = urlPrefix + "/addUser";
    // simulate form data
    MultiValueMap<String, Object> formData = new LinkedMultiValueMap<String, Object>();
    formData.add("name", "赵晓丽");
    formData.add("password", "123456");
    formData.add("sex", "true");
    formData.add("birthDay", "2018-01-26");
    formData.add("photo", "user.png");
    formData.add("email", "[email protected]");
    // simulate a file upload
    FileSystemResource fileSystemResource = new FileSystemResource("/Users/admin/Desktop/head.png");
    formData.add("multipartFile", fileSystemResource);
    User user = restTemplate.postForObject(url, formData, User.class);
    assertNotNull("user ID", user.getId());
}
/*
@PutMapping(value = "/updateUser")
public void updateUser(User user,
@RequestParam(value = "multipartFile",required = false)
MultipartFile multipartFile) throws IOException
*/
@Test
public void testUpdateUser() {
    String url = urlPrefix + "/updateUser";
    // simulate form data
    MultiValueMap<String, Object> formData = new LinkedMultiValueMap<String, Object>();
    formData.add("id", "6");
    formData.add("name", "赵晓丽");
    formData.add("password", "123456");
    formData.add("sex", "true");
    formData.add("birthDay", "2018-01-27");
    formData.add("photo", "user1.png");
    formData.add("email", "[email protected]");
    // simulate a file upload
    FileSystemResource fileSystemResource = new FileSystemResource("/Users/admin/Desktop/head.png");
    formData.add("multipartFile", fileSystemResource);
    restTemplate.put(url, formData);
}
/*
@DeleteMapping(value = "/deleteUserByIds")
public void delteUserByIds(@RequestParam(value = "ids") Integer[] ids)
*/
@Test
public void testDeleteUser() {
String url = urlPrefix + "/deleteUserByIds?ids={id}";
Map<String, Object> parameters = new HashMap<String, Object>();
parameters.put("id", "4,5,6");
restTemplate.delete(url, parameters);
}
/*@GetMapping(value = "/queryUserByPage")
public List<User> queryUserByPage(@RequestParam(value = "page",defaultValue = "1")
Integer pageNow,
@RequestParam(value = "rows",defaultValue =
"10") Integer pageSize,
@RequestParam(value = "column",required = false)
String column,
@RequestParam(value = "value",required = false)
String value)
*/
@Test
public void testQueryUserByPage() {
String url = urlPrefix + "/queryUserByPage?page={page}&rows={rows}&column={column}&value={value}";
Map<String, Object> params = new HashMap<String, Object>();
params.put("page", 1);
params.put("rows", 10);
params.put("column", "name");
params.put("value", "小");
User[] users = restTemplate.getForObject(url, User[].class, params);
assertNotNull("用户数据", users);
}
/*
@GetMapping(value = "/queryUserCount")
public Integer queryUserCount(@RequestParam(value = "column",required = false)
String column,
@RequestParam(value = "value",required = false)
String value)
*/
@Test
public void testQueryUserCount() {
String url = urlPrefix + "/queryUserCount?column={column}&value={value}";
//模拟表单数据
Map<String, Object> parameters = new HashMap<>();
parameters.put("column", "name");
parameters.put("value", "晓");
Integer count = restTemplate.getForObject(url, Integer.class, parameters);
assertNotNull(count);
}
/*
@GetMapping(value = "/queryUserById")
public User queryUserById(@RequestParam(value = "id") Integer id)
*/
@Test
public void testQueryUserById() {
String url = urlPrefix + "/queryUserById?id={id}";
//模拟表单数据
Map<String, Object> parameters = new HashMap<>();
parameters.put("id", "2");
User user = restTemplate.getForObject(url, User.class, parameters);
assertNotNull("用户ID", user.getId());
}
}
- RestUserControllerTests
/**
* RestUserController测试类
*/
@SpringBootTest(classes = {Application.class})
@RunWith(SpringRunner.class)
public class RestUserControllerTests {
@Resource
private RestTemplate restTemplate;
private String urlPrefix = "http://127.0.0.1:8888/restUserManager";
/*
@PostMapping(value = "/registerUser")
public User registerUser(@RequestPart(value = "user") User user,
@RequestParam(value = "multipartFile",required = false)
MultipartFile multipartFile) throws IOException {
*/
@Test
public void testRegisterUser() {
String url = urlPrefix + "/registerUser";
//模拟表单数据
MultiValueMap<String, Object> formData = new LinkedMultiValueMap<String, Object>();
User user = new User(null, "张晓磊", true, "123456", new Date(), "aa.png", "[email protected]");
formData.add("user", user);
//模拟文件上传
FileSystemResource fileSystemResource = new FileSystemResource("/Users/admin/Desktop/head.png");
formData.add("multipartFile", fileSystemResource);
User registerUser = restTemplate.postForObject(url, formData, User.class);
assertNotNull("用户ID", registerUser.getId());
}
/*
@PostMapping(value = "/userLogin")
public User userLogin(@RequestBody User user)
*/
@Test
public void testUserLogin() {
String url = urlPrefix + "/userLogin";
//模拟表单数据
User user = new User();
user.setName("张晓磊");
user.setPassword("123456");
User loginUser = restTemplate.postForObject(url, user, User.class);
assertNotNull("用户ID", loginUser.getId());
}
/*
@PostMapping(value = "/addUser")
public User addUser(@RequestPart(value = "user") User user,
@RequestParam(value = "multipartFile",required = false)
MultipartFile multipartFile) throws IOException
*/
@Test
public void testAddUser() {
String url = urlPrefix + "/addUser";
//模拟表单数据
MultiValueMap<String, Object> formData = new LinkedMultiValueMap<String, Object>();
User user = new User(null, "温晓琪", true, "123456", new Date(), "aa.png", "[email protected]");
formData.add("user", user);
//模拟文件上传
FileSystemResource fileSystemResource = new FileSystemResource("/Users/admin/Desktop/head.png");
formData.add("multipartFile", fileSystemResource);
User dbUser = restTemplate.postForObject(url, formData, User.class);
assertNotNull("⽤户ID", dbUser.getId());
}
/*
@PutMapping(value = "/updateUser")
public void updateUser(@RequestPart(value = "user") User user,
@RequestParam(value = "multipartFile",required = false)
MultipartFile multipartFile) throws IOException {
*/
@Test
public void testUpdateUser() {
String url = urlPrefix + "/updateUser";
//模拟表单数据
MultiValueMap<String, Object> formData = new LinkedMultiValueMap<String, Object>();
User user = new User(9, "温晓琪", false, "123456", new Date(), "aa.png", "[email protected]");
formData.add("user", user);
//模拟文件上传
FileSystemResource fileSystemResource = new FileSystemResource("/Users/admin/Desktop/head.png");
formData.add("multipartFile", fileSystemResource);
restTemplate.put(url, formData);
}
/*
@DeleteMapping(value = "/deleteUserByIds")
public void delteUserByIds(@RequestParam(value = "ids") Integer[] ids)
*/
@Test
public void testDeleteUser() {
String url = urlPrefix + "/deleteUserByIds?ids={id}";
Map<String, Object> parameters = new HashMap<String, Object>();
parameters.put("id", "4,5,6");
restTemplate.delete(url, parameters);
}
/*
@GetMapping(value = "/queryUserByPage")
public List<User> queryUserByPage(@RequestParam(value = "page",defaultValue = "1")
Integer pageNow,
@RequestParam(value = "rows",defaultValue =
"10") Integer pageSize,
@RequestParam(value = "column",required = false)
String column,
@RequestParam(value = "value",required = false)
String value)
*/
@Test
public void testQueryUserByPage() {
String url = urlPrefix + "/queryUserByPage?page={page}&rows={rows}&column={column}&value={value}";
Map<String, Object> params = new HashMap<String, Object>();
params.put("page", 1);
params.put("rows", 10);
params.put("column", "name");
params.put("value", "小");
User[] users = restTemplate.getForObject(url, User[].class, params);
assertNotNull("用户数据", users);
}
/*
@GetMapping(value = "/queryUserCount")
public Integer queryUserCount(@RequestParam(value = "column",required = false)
String column,
@RequestParam(value = "value",required = false)
String value)
*/
@Test
public void testQueryUserCount() {
String url = urlPrefix + "/queryUserCount?column={column}&value={value}";
//模拟表单数据
Map<String, Object> parameters = new HashMap<>();
parameters.put("column", "name");
parameters.put("value", "晓");
Integer count = restTemplate.getForObject(url, Integer.class, parameters);
assertNotNull(count);
}
/*
@GetMapping(value = "/queryUserById")
public User queryUserById(@RequestParam(value = "id") Integer id)
*/
@Test
public void testQueryUserById() {
String url = urlPrefix + "/queryUserById?id={id}";
//模拟表单数据
Map<String, Object> parameters = new HashMap<>();
parameters.put("id", "2");
User user = restTemplate.getForObject(url, User.class, parameters);
assertNotNull("用户ID", user.getId());
}
}
- Application
/**
* SpringBoot入口类
*/
@SpringBootApplication
@MapperScan(basePackages = "com.baizhi.dao")
public class Application extends SpringBootServletInitializer {
public static void main(String[] args) {
SpringApplication.run(Application.class, args);
}
@Override
public SpringApplicationBuilder configure(SpringApplicationBuilder builder) {
return builder.sources(Application.class);
}
@Bean
public RedisTemplate redisTemplate(RedisConnectionFactory redisConnectionFactory) {
RedisTemplate redisTemplate = new RedisTemplate();
redisTemplate.setConnectionFactory(redisConnectionFactory);
redisTemplate.setValueSerializer(new GenericJackson2JsonRedisSerializer());
return redisTemplate;
}
}
集成Redis实现二级缓存
- 环境准备:虚拟机已安装Redis,并正常运行
- ApplicationContextHolder
/**
 * 实现ApplicationContextAware回调接口,Spring工厂在初始化的时候会自动回调注入applicationContext
 */
@Component
public class ApplicationContextHolder implements ApplicationContextAware {
private static ApplicationContext applicationContext = null;
@Override
public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
ApplicationContextHolder.applicationContext = applicationContext;
}
public static Object getBean(String beanName) {
return applicationContext.getBean(beanName);
}
}
- UserDefineRedisCache
/**
* 集成Redis开启Mybatis二级缓存
*/
public class UserDefineRedisCache implements Cache {
private static final Logger LOGGER = LoggerFactory.getLogger(UserDefineRedisCache.class);
//记录的是Mapper的namespace
private String id;
private final ReadWriteLock readWriteLock = new ReentrantReadWriteLock();
private RedisTemplate redisTemplate = (RedisTemplate) ApplicationContextHolder.getBean("redisTemplate");
public UserDefineRedisCache(String id) {
this.id = id;
}
@Override
public String getId() {
return id;
}
@Override
public void putObject(Object key, Object value) {
LOGGER.debug("将查询结果缓存到Redis");
ValueOperations operations = redisTemplate.opsForValue();
operations.set(key, value, 30, TimeUnit.MINUTES);
}
@Override
public Object getObject(Object key) {
LOGGER.debug("获取缓存结果");
ValueOperations opsForValue = redisTemplate.opsForValue();
return opsForValue.get(key);
}
@Override
public Object removeObject(Object key) {
LOGGER.debug("删除Redis中的key:" + key);
ValueOperations opsForValue = redisTemplate.opsForValue();
Object value = opsForValue.get(key);
redisTemplate.delete(key);
return value;
}
@Override
public void clear() {
LOGGER.debug("删除所有Redis中的缓存");
redisTemplate.execute((RedisCallback) connection -> {
connection.flushDb();
return null;
});
}
@Override
public int getSize() {
return 0;
}
@Override
public ReadWriteLock getReadWriteLock() {
return readWriteLock;
}
}
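上面的 UserDefineRedisCache 只有在对应的 Mapper 映射文件中声明后才会生效。下面是一个示意片段(其中 namespace 和 Cache 实现类的包路径均为假设值,请以实际工程为准):

```xml
<mapper namespace="com.baizhi.dao.UserDAO">
    <!-- type 指向自定义Cache实现的全限定类名(此处包路径为假设值) -->
    <cache type="com.baizhi.cache.UserDefineRedisCache"/>
    <!-- 其余select/insert等语句省略 -->
</mapper>
```

声明后,该namespace下的查询结果会写入Redis,增删改操作会触发clear()清空缓存。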
实现Mysql的读写分离
背景
一个项目中最基础也最主流的是单机数据库,读写都在一个库中。当用户逐渐增多、单机数据库无法满足性能要求时,就会进行读写分离改造(适用于读多写少的场景):写操作走一个库,读操作走多个库。通常会搭建一个数据库集群,开启主从备份,一主多从,以提高读取性能。当用户继续增多、读写分离也无法满足需求时,就需要分布式数据库(NoSQL)了。正常情况下实现读写分离,首先要搭建一主多从的数据库集群,同时进行数据同步。
数据库主从搭建
Master配置
- 修改/etc/my.cnf
[mysqld]
server-id=1
log-bin=mysql-bin
log-slave-updates
slave-skip-errors=all
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
- 重启MySQL服务
- 登录MySQL主机查看状态
[[email protected] ~]# mysql -uroot -proot
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.6.42-log MySQL Community Server (GPL)
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show master status\G
*************************** 1. row ***************************
File: mysql-bin.000007
Position: 120
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set:
1 row in set (0.00 sec)
Slave 配置
- 修改/etc/my.cnf⽂件
[mysqld]
server-id=2
log-bin=mysql-bin
log-slave-updates
slave-skip-errors=all
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
- 重启MySQL服务
systemctl restart mysqld
- MySQL配置从机
[[email protected] ~]# mysql -u root -proot
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.73-log Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> change master to
-> master_host='192.168.157.196',
-> master_user='root',
-> master_password='root',
-> master_log_file='mysql-bin.000007',
-> master_log_pos=120;
Query OK, 0 rows affected, 2 warnings (0.00 sec)
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.157.196
Master_User: root
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000007
Read_Master_Log_Pos: 120
Relay_Log_File: mysqld-relay-bin.000002
Relay_Log_Pos: 283
Relay_Master_Log_File: mysql-bin.000007
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 120
Relay_Log_Space: 457
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
Master_UUID: 012f29df-4688-11ea-b41c-000c297c7433
Master_Info_File: /var/lib/mysql/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
1 row in set (0.00 sec)
读写分离实现
读写分离要做的事情,就是决定一条SQL该交给哪个数据库去执行。至于由谁来选择数据库,无非两种:要么中间件帮我们做,要么程序自己做。因此,一般来讲,读写分离有两种实现方式:第一种是依靠中间件(比如MyCat),应用程序连接到中间件,由中间件帮我们做SQL分离;第二种是应用程序自己去做分离。
编码思想
所谓手写读写分离,需要用户自定义一个动态数据源:该数据源根据当前上下文中调用的方法是读方法还是写方法,决定返回主库连接还是从库连接。这里我们使用Spring提供的代理数据源抽象类AbstractRoutingDataSource。
该类需要用户实现一个抽象方法determineCurrentLookupKey,系统会根据该方法的返回值决定使用系统中定义的哪个数据源。
@Nullable
protected abstract Object determineCurrentLookupKey();
其次,该类还有两个属性需要指定:defaultTargetDataSource和targetDataSources。其中defaultTargetDataSource需要指定为Master数据源;targetDataSources是一个Map,需要将所有的数据源添加到该Map中。之后系统会以determineCurrentLookupKey方法的返回值作为key,从targetDataSources中查找相应的实际数据源;如果找不到,则使用defaultTargetDataSource指定的数据源。
实现步骤
- 配置application.yml
# 访问接口配置
server:
  port: 8989
  servlet:
    context-path: /UserModel
spring:
  # 数据源配置---单机模式
  # datasource:
  #   type: com.alibaba.druid.pool.DruidDataSource
  #   username: root
  #   password: root
  #   driver-class-name: com.mysql.jdbc.Driver
  #   url: jdbc:mysql://CentOS:3306/usermodel?useSSL=false&characterEncoding=UTF8&serverTimezone=GMT
  # 数据源配置---集群模式
  datasource:
    # 主节点
    master:
      type: com.alibaba.druid.pool.DruidDataSource
      username: root
      password: root
      driver-class-name: com.mysql.jdbc.Driver
      jdbc-url: jdbc:mysql://CentOSA:3306/usermodel?useUnicode=true&characterEncoding=UTF8&serverTimezone=UTC&useSSL=false
    # 从节点1
    slave1:
      type: com.alibaba.druid.pool.DruidDataSource
      username: root
      password: root
      driver-class-name: com.mysql.jdbc.Driver
      jdbc-url: jdbc:mysql://CentOSB:3306/usermodel?useUnicode=true&characterEncoding=UTF8&serverTimezone=UTC&useSSL=false
    # 从节点2
    slave2:
      type: com.alibaba.druid.pool.DruidDataSource
      username: root
      password: root
      driver-class-name: com.mysql.jdbc.Driver
      jdbc-url: jdbc:mysql://CentOSC:3306/usermodel?useUnicode=true&characterEncoding=UTF8&serverTimezone=UTC&useSSL=false
  # 上传文件控制
  servlet:
    multipart:
      max-request-size: 50MB
      max-file-size: 50MB
      enabled: true
  # 乱码解决
  http:
    encoding:
      charset: utf-8
  # 连接redis
  redis:
    host: CentOS
    port: 6379
# Mybatis配置
mybatis:
  mapper-locations: classpath:mapper/*.xml
  type-aliases-package: com.baizhi.entity
  # 在resource目录下建立config文件夹存放mybatis.xml
  config-location: classpath:config/mybatis.xml
# 日志级别
logging:
  level:
    root: info
- 配置数据源
/**
* 自定义数据源
*/
@Configuration
public class UserDefineDatasourceConfig {
/**
* 主节点
*
* @return
*/
@Bean
@ConfigurationProperties("spring.datasource.master")
public DataSource masterDatasource() {
return DataSourceBuilder.create().build();
}
/**
* 从节点1
*
* @return
*/
@Bean
@ConfigurationProperties("spring.datasource.slave1")
public DataSource slave1Datasource() {
return DataSourceBuilder.create().build();
}
/**
* 从节点2
*
* @return
*/
@Bean
@ConfigurationProperties("spring.datasource.slave2")
public DataSource slave2Datasource() {
return DataSourceBuilder.create().build();
}
/**
* 数据源配置
*
* @param masterDatasource
* @param slave1Datasource
* @param slave2Datasource
* @return
*/
@Bean
public DataSource proxyDataSource(@Qualifier("masterDatasource") DataSource masterDatasource,
@Qualifier("slave1Datasource") DataSource slave1Datasource,
@Qualifier("slave2Datasource") DataSource slave2Datasource
) {
DataSourceProxy proxy = new DataSourceProxy();
//设置默认数据源
proxy.setDefaultTargetDataSource(masterDatasource);
HashMap<Object, Object> map = new HashMap<>();
map.put("master", masterDatasource);
map.put("slave1", slave1Datasource);
map.put("slave2", slave2Datasource);
//注册所有数据源
proxy.setTargetDataSources(map);
return proxy;
}
/**
* 覆盖SqlSessionFactory自定义数据源
*
* @param dataSource
* @return
* @throws Exception
*/
@Bean
public SqlSessionFactory sqlSessionFactory(@Qualifier("proxyDataSource") DataSource dataSource) throws Exception {
SqlSessionFactoryBean sqlSessionFactoryBean = new SqlSessionFactoryBean();
sqlSessionFactoryBean.setDataSource(dataSource);
sqlSessionFactoryBean.setTypeAliasesPackage("com.baizhi.entity");
sqlSessionFactoryBean.setMapperLocations(new PathMatchingResourcePatternResolver().getResources("classpath*:mapper/*.xml"));
SqlSessionFactory sqlSessionFactory = sqlSessionFactoryBean.getObject();
return sqlSessionFactory;
}
/**
* 覆盖SqlSessionTemplate,开启BATCH处理模式
*
* @param sqlSessionFactory
* @return
*/
@Bean
public SqlSessionTemplate sqlSessionTemplate(@Qualifier("sqlSessionFactory") SqlSessionFactory sqlSessionFactory) {
//开启BATCH执行器,实现批处理模式
return new SqlSessionTemplate(sqlSessionFactory, ExecutorType.BATCH);
}
/**
* 事务手动注入,否则事务不生效
*
* @param dataSource
* @return
*/
@Bean
public PlatformTransactionManager platformTransactionManager(@Qualifier("proxyDataSource") DataSource dataSource) {
return new DataSourceTransactionManager(dataSource);
}
}
- 配置切面
/**
* 自定义切面,负责读取slaveDB注解,并且在DBTypeContextHolder中设置读写类型
*/
@Aspect
@Order(0)
@Component
public class UserDefineDataSourceAOP {
private static final Logger LOGGER = LoggerFactory.getLogger(UserDefineDataSourceAOP.class);
/**
* 配置环绕切面,该切面的作用是设置当前上下文的读写类型
*
* @param pjp
* @return
*/
@Around("execution(* com.baizhi.service..*.*(..))")
public Object methodInterceptor(ProceedingJoinPoint pjp) throws Throwable {
//获取当前的方法信息
MethodSignature methodSignature = (MethodSignature) pjp.getSignature();
Method method = methodSignature.getMethod();
//判断方法上是否存在注解@SlaveDB:存在则为读(走从库),否则为写(走主库)
OperType operType = method.isAnnotationPresent(SlaveDB.class) ? OperType.READ : OperType.WRIRTE;
OPTypeContextHolder.setOperType(operType);
LOGGER.debug("当前操作:" + operType);
try {
return pjp.proceed();
} finally {
//清除线程变量放在finally中,保证业务方法抛异常时也能清理,避免线程复用导致读写类型串线
OPTypeContextHolder.clear();
}
}
}
- 动态数据源
/**
* 配置动态数据源
*/
public class DataSourceProxy extends AbstractRoutingDataSource {
private static final Logger logger = LoggerFactory.getLogger(DataSourceProxy.class);
private String masterDBKey = "master";
private List<String> slaveDBKeys = Arrays.asList("slave1", "slave2");
private static final AtomicInteger round = new AtomicInteger(0);
/**
* 需要在该方法中,判断当前用户的操作是读操作还是写操作
*
* @return
*/
@Override
protected Object determineCurrentLookupKey() {
String dbKey = null;
OperType operType = OPTypeContextHolder.getOperType();
if (operType.equals(OperType.WRIRTE)) {
dbKey = masterDBKey;
} else {
//轮询返回 0 1 2 3 ...,floorMod保证计数器溢出为负数时索引仍然非负
int value = round.getAndIncrement();
int index = Math.floorMod(value, slaveDBKeys.size());
dbKey = slaveDBKeys.get(index);
}
logger.debug("当前的DBkey:" + dbKey);
return dbKey;
}
}
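上文 DataSourceProxy 中先 getAndIncrement 再 round.get() 的写法在并发下存在竞态:两次读取之间计数可能已被其他线程修改。下面是一段独立可运行的最小示意(类名为假设,非项目代码),用一次原子取值加 Math.floorMod 完成安全轮询:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// 独立演示:AtomicInteger 轮询选择从库 key(仅示意,非项目代码)
public class RoundRobinDemo {
    static final List<String> SLAVE_KEYS = Arrays.asList("slave1", "slave2");
    static final AtomicInteger ROUND = new AtomicInteger(0);

    public static String nextSlaveKey() {
        int value = ROUND.getAndIncrement();                  // 只做一次原子读取
        int index = Math.floorMod(value, SLAVE_KEYS.size());  // 计数溢出为负时仍返回非负索引
        return SLAVE_KEYS.get(index);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 4; i++) {
            System.out.println(nextSlaveKey()); // slave1 与 slave2 交替出现
        }
    }
}
```

相比在 determineCurrentLookupKey 中重置计数器,floorMod 的写法无锁且无竞态。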
- 读写类型
/**
* 定义操作类型
*/
public enum OperType {
WRIRTE,READ;
}
- 记录操作类型
/**
* 该类主要是用于存储,当前用户的操作类型,将当前的操作存储在当前线程的上下文中
*/
public class OPTypeContextHolder {
private static final ThreadLocal<OperType> OPER_TYPE_THREAD_LOCAL=new ThreadLocal<>();
public static void setOperType(OperType operType){
OPER_TYPE_THREAD_LOCAL.set(operType);
}
public static OperType getOperType(){
return OPER_TYPE_THREAD_LOCAL.get();
}
public static void clear(){
OPER_TYPE_THREAD_LOCAL.remove();
}
}
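OPTypeContextHolder 基于 ThreadLocal 保存读写类型,切面里调用 clear() 是必须的:Web 容器的工作线程会被复用,不清理就可能读到上一次请求残留的值。下面用一段独立可运行的小例子(类名为假设,非项目代码)演示 remove 前后的差别:

```java
// 独立演示:ThreadLocal 不 remove 时,同一线程后续读取会拿到残留值
public class ThreadLocalDemo {
    static final ThreadLocal<String> HOLDER = new ThreadLocal<>();

    // 模拟一次"请求":设置类型后,根据是否清理返回当前线程再次读到的值
    public static String afterRequest(String operType, boolean clear) {
        HOLDER.set(operType);
        if (clear) {
            HOLDER.remove(); // 对应 OPTypeContextHolder.clear()
        }
        return HOLDER.get();
    }

    public static void main(String[] args) {
        System.out.println(afterRequest("READ", false)); // READ:残留值仍可读到
        System.out.println(afterRequest("READ", true));  // null:已清理
    }
}
```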
- 业务方法标记注解
/**
* 该注解用于标注,当前用户的调用方法是读还是写
*/
@Retention(RetentionPolicy.RUNTIME) //表示运行时解析注解
@Target(value = {ElementType.METHOD})//表示只能在方法上加
public @interface SlaveDB {
}
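上文切面判断读写类型的核心,就是在运行时反射检查方法上是否存在 @SlaveDB 注解。下面是一段独立可运行的示意(类名与方法名均为假设,非项目代码),还原这一判断逻辑:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// 独立演示:用反射检查@SlaveDB注解,决定读(从库)还是写(主库)
public class SlaveDBDemo {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface SlaveDB {}

    public static class UserService {
        @SlaveDB
        public void queryUser() {}  // 读方法,路由到从库
        public void addUser() {}    // 写方法,路由到主库
    }

    public static String operTypeOf(String methodName) {
        try {
            Method m = UserService.class.getMethod(methodName);
            return m.isAnnotationPresent(SlaveDB.class) ? "READ" : "WRITE";
        } catch (NoSuchMethodException e) {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(operTypeOf("queryUser")); // READ
        System.out.println(operTypeOf("addUser"));   // WRITE
    }
}
```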
- logback.xml配置
<logger name="com.baizhi.datasource" level="DEBUG" additivity="false">
<appender-ref ref="STDOUT" />
</logger>
- 代码结构
分布式文件系统集成
概述
分布式文件系统(Distributed File System)是指文件系统管理的物理存储资源不一定直接连接在本地节点上,而是通过计算机网络与节点相连。
计算机通过文件系统管理、存储数据;而信息爆炸时代中人们可以获取的数据成指数倍增长,单纯通过增加硬盘个数来扩展计算机文件系统存储容量的方式,在容量大小、容量增长速度、数据备份、数据安全等方面的表现都不能令人满意。分布式文件系统可以有效解决数据的存储和管理难题:将固定于某个地点的某个文件系统,扩展到任意多个地点/多个文件系统,众多的节点组成一个文件系统网络。每个节点可以分布在不同的地点,通过网络进行节点间的通信和数据传输。人们在使用分布式文件系统时,无需关心数据存储在哪个节点上、或者是从哪个节点获取的,只需要像使用本地文件系统一样管理和存储文件系统中的数据。
文件系统最初设计时,仅仅是为局域网内的本地数据服务的。而分布式文件系统将服务范围扩展到了整个网络,不仅改变了数据的存储和管理方式,也拥有了本地文件系统所无法具备的数据备份、数据安全等优点。判断一个分布式文件系统是否优秀,取决于以下三个因素:
数据的存储方式:例如有1000万个数据文件,可以在一个节点存储全部数据文件,在其他N个节点上每个节点存储1000/N万个数据文件作为备份;或者平均分配到N个节点上存储,每个节点存储1000/N万个数据文件。无论采取何种存储方式,目的都是为了保证数据的存储安全和方便获取。
数据的读取速率:包括响应用户读取数据文件请求、定位数据文件所在节点、读取实际硬盘中数据文件的时间、不同节点间的数据传输时间以及一部分处理器的处理时间等。各种因素决定了分布式文件系统的用户体验,即分布式文件系统中数据的读取速率不能与本地文件系统相差太大,否则在本地文件系统中打开一个文件需要2秒,而在分布式文件系统中受各种因素影响用时超过10秒,就会严重影响用户的使用体验。
数据的安全机制:由于数据分散在各个节点中,必须采取冗余、备份、镜像等方式,保证在节点出现故障的情况下能够进行数据恢复,确保数据安全。
文件系统分类
- 块存储:MongoDB数据库中的GridFS、Hadoop中的HDFS,这些系统在存储文件的时候会尝试先将文件打碎存储(拆分成Data Block)。这种存储方式的优点是可以存储超大型文件,更加高效地利用磁盘资源,但需要额外存储文件碎片的元数据信息。
在块存储中,HDFS默认的块大小是128MB,而MongoDB中默认Chunk大小是255KB。虽然都支持块存储,但应用场景有很大差异:HDFS适用于超大文本日志文件存储;MongoDB则适合存储超大的流媒体文件,例如超大的音频和视频,可以实现流媒体数据流的区间加载。
- 文件存储:GlusterFS、NFS、FastDFS等都是以文件为单位存储,这种存储方式不会将文件打碎,而是把整个文件存储到系统中的某一台服务器上。这种方式的优点是可以应对小文件场景,系统维护简单,无需存储文件碎片的元数据,系统设计和维护成本低。
FastDFS 介绍
特点
FastDFS 是一款开源的轻量级分布式文件系统,具有如下特点:
- 纯C语言实现,支持Linux、FreeBSD等UNIX系统。
- 类似GoogleFS/HDFS,但不是通用的文件系统,只能通过专有的API访问,目前提供了C、Java和PHP的API。为互联网量身定做,追求高性能、高扩展性。
- FastDFS不仅可以存储文件,还可以存储文件的元数据信息(可选)。
架构
整个FastDFS架构中有Client、Tracker和Storage服务。其中Client用于提交文件给FastDFS集群;Storage服务负责实际数据的存储;Tracker服务负责监控和调度Storage服务,起到负载均衡器的作用。
如果Storage中的卷(group)相同,也就意味着这些服务彼此数据相互备份,实现数据的冗余备份;不同的卷存储整个集群中的部分文件,类似于传统单机文件系统的分区概念(C盘、D盘、…)。
Tracker Server主要做工作调度,在访问上起负载均衡的作用。它在内存中记录集群中卷(group)和Storage Server的状态信息,是连接Client和Storage Server的枢纽。因为相关信息存储在内存中,所以Tracker Server性能非常高,一个较大的集群(上百个卷)中3台就够了。
Storage Server:存储服务器,文件和文件属性信息(meta数据)都存储在服务器的磁盘上。
上传、下载机制
上传流程:
- Storage Server会定期向Tracker服务器汇报自身状态信息,例如健康状态和存储容量。
- Client连接Tracker Server发送文件上传请求。
- Tracker Server根据注册的Storage Server信息,返回一台可用的Storage Server的调用信息。
- Client拿到信息后直接连接对应的Storage Server,进行点到点的文件上传(可选附加元数据)。
- Storage Server收到文件后,根据自己的位置信息生成File_ID,并将File_ID和用户携带的元数据信息进行关联,然后将File_ID返回给Client。
- 返回的File_ID文件服务器并不会存储,需要Client端保存在外围数据库中,以后Client端可以通过File_ID下载对应的文件或元数据。
下载流程:
- Client连接Tracker Server,携带File_ID参数发送文件下载请求。
- Tracker Server根据注册的Storage Server信息,返回一台可用的Storage Server的调用信息(主从服务器均可)。
- Client拿到信息后直接连接对应的Storage Server,进行点到点的文件下载(可读取元数据)。
- Storage Server收到请求后解析File_ID,读取本地文件流,将数据写回Client端。
File_ID组成
File_ID由Storage Server生成并返回给Client端。File_ID包含了组/卷名和文件路径,Storage Server可以直接根据File_ID定位到该文件。
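按上面的说明,客户端拿到File_ID后可以自行拆出组名与组内路径(例如落库前做校验)。下面是一段独立的示意代码(非FastDFS官方API,仅演示拆分规则),示例File_ID取自上文shell演示的真实返回值:

```java
// 独立演示:拆解FastDFS的File_ID为组名与组内路径(仅示意)
public class FileIdDemo {
    // 返回组/卷名,如 group2
    public static String groupOf(String fileId) {
        return fileId.substring(0, fileId.indexOf('/'));
    }

    // 返回Storage上的组内相对路径,如 M00/00/00/xxx
    public static String pathOf(String fileId) {
        return fileId.substring(fileId.indexOf('/') + 1);
    }

    public static void main(String[] args) {
        String fileId = "group2/M00/00/00/wKidw1522Y-AWFqVAAAC_y4JXtg0728036";
        System.out.println(groupOf(fileId)); // group2
        System.out.println(pathOf(fileId));  // M00/00/00/wKidw1522Y-AWFqVAAAC_y4JXtg0728036
    }
}
```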
FastDFS集群搭建
资源下载安装
进入 https://github.com/happyfish100 网站下载FastDFS相关资源。建议使用小编给大家处理过的安装包。
环境准备
准备三台机器,保证Mysql集群正常运行
- 安装C语言环境(如:yum install -y gcc gcc-c++)
具体步骤
- 1.安装依赖包libfastcommon https://github.com/happyfish100/libfastcommon/archive/V1.0.35.tar.gz
[[email protected] ~]# tar -zxf V1.0.35.tar.gz
[[email protected] ~]# cd libfastcommon-1.0.35
[[email protected] libfastcommon-1.0.35]# ./make.sh
[[email protected] libfastcommon-1.0.35]# ./make.sh install
- 2.安装FastDFS https://github.com/happyfish100/fastdfs/archive/V5.11.tar.gz
[[email protected] ~]# yum install -y perl-devel
[[email protected] ~]# tar -zxf fastdfs-5.11.tar.gz
[[email protected] ~]# cd fastdfs-5.11
[[email protected] fastdfs-5.11]# ./make.sh
[[email protected] fastdfs-5.11]# ./make.sh install
提示:当软件安装结束后,默认FastDFS启动所需的配置文件放置在/etc/fdfs目录下。
[[email protected] ~]# yum install -y tree
[[email protected] ~]# tree /etc/fdfs/
/etc/fdfs/
├── client.conf.sample
├── storage.conf.sample
├── storage_ids.conf.sample
└── tracker.conf.sample
0 directories, 4 files
# 可运行脚本
[[email protected] ~]# ls -l /etc/init.d/fdfs_*
-rwxr-xr-x. 1 root root 961 Jun 30 20:22 /etc/init.d/fdfs_storaged
-rwxr-xr-x. 1 root root 963 Jun 30 20:22 /etc/init.d/fdfs_trackerd
# 执行程序
[[email protected] ~]# whereis fdfs_storaged fdfs_trackerd
fdfs_storaged: /usr/bin/fdfs_storaged
fdfs_trackerd: /usr/bin/fdfs_trackerd
配置服务
- 1.创建fdfs运行所需的数据目录
[[email protected] ~]# mkdir -p /data/fdfs/{tracker,storage/store01,storage/store02}
[[email protected] ~]# tree /data/
/data/
└── fdfs
├── storage
│ ├── store01
│ └── store02
└── tracker
5 directories, 0 files
- 2.创建启动所需的配置文件
[[email protected] ~]# cp /etc/fdfs/tracker.conf.sample /etc/fdfs/tracker.conf
[[email protected] ~]# cp /etc/fdfs/storage.conf.sample /etc/fdfs/storage.conf
[[email protected] ~]# cp /etc/fdfs/client.conf.sample /etc/fdfs/client.conf
[[email protected] ~]# tree /etc/fdfs/
/etc/fdfs/
├── client.conf
├── client.conf.sample
├── storage.conf
├── storage.conf.sample
├── storage_ids.conf.sample
├── tracker.conf
└── tracker.conf.sample
0 directories, 7 files
- 3.配置Tracker Server
[[email protected] ~]# vim /etc/fdfs/tracker.conf
base_path=/data/fdfs/tracker
- 4.配置Storage Server
[[email protected] ~]# vim /etc/fdfs/storage.conf
# 三台机器分别配置为group1、group2、group3
group_name=group1
base_path=/data/fdfs/storage
store_path_count=2
store_path0=/data/fdfs/storage/store01
store_path1=/data/fdfs/storage/store02
tracker_server=CentOSA:22122
tracker_server=CentOSB:22122
tracker_server=CentOSC:22122
- 修改Client端
[[email protected] ~]# vim /etc/fdfs/client.conf
base_path=/tmp
tracker_server=CentOSA:22122
tracker_server=CentOSB:22122
tracker_server=CentOSC:22122
启动服务器
[[email protected] ~]# /etc/init.d/fdfs_trackerd start
Reloading systemd:[ 确定 ]
Starting fdfs_trackerd (via systemctl):[ 确定 ]
[[email protected] ~]# /etc/init.d/fdfs_storaged start
Starting fdfs_storaged (via systemctl):[ 确定 ]
[[email protected] ~]# ps -aux | grep fdfs
Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.8/FAQ
root 78950 0.0 0.1 144784 2040 ? Sl 21:06 0:00
/usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf
root 79000 13.0 3.2 83520 67144 ? Sl 21:06 0:06
/usr/bin/fdfs_storaged /etc/fdfs/storage.conf
root 79324 0.0 0.0 103320 884 pts/0 S+ 21:07 0:00 grep fdfs
[[email protected] ~]#
FastDFS Shell-运维
- 上传文件
[[email protected] ~]# fdfs_upload_file /etc/fdfs/client.conf t_employee
group2/M00/00/00/wKidw1522Y-AWFqVAAAC_y4JXtg0728036
- 下载文件:fdfs_download_file /etc/fdfs/client.conf <File_ID>
- 查看文件信息
[[email protected] ~]# fdfs_file_info /etc/fdfs/client.conf group2/M00/00/00/wKidw1522Y-AWFqVAAAC_y4JXtg0728036
source storage id: 0
source ip address: 192.168.157.195
file create timestamp: 2020-03-22 11:20:47
file size: 767
file crc32: 772366040 (0x2E095ED8)
- 删除文件:fdfs_delete_file /etc/fdfs/client.conf <File_ID>
- 文件追加
[[email protected] ~]# echo "hello" > 1.txt
[[email protected] ~]# echo "word" > 2.txt
[[email protected] ~]# fdfs_upload_appender /etc/fdfs/client.conf /root/1.txt
group2/M00/00/00/wKikgl0qX4KEep_PAAAAAIpZx2Y751.txt
[[email protected] ~]# fdfs_append_file /etc/fdfs/client.conf
group2/M00/00/00/wKikgl0qX4KEep_PAAAAAIpZx2Y751.txt 2.txt
[[email protected] ~]# fdfs_download_file /etc/fdfs/client.conf
group2/M00/00/00/wKikgl0qX4KEep_PAAAAAIpZx2Y751.txt
[[email protected] ~]# cat wKikgl0qX4KEep_PAAAAAIpZx2Y751.txt
hello
word
- 监视状态:fdfs_monitor /etc/fdfs/client.conf
- 文件校验和
[[email protected] ~]# fdfs_crc32 /etc/fdfs/client.conf
group2/M00/00/00/wKikgl0qX4KEep_PAAAAAIpZx2Y751.txt
2911662598
Nginx集成FastDFS
Nginx配置安装
- 下载fastdfs-nginx-module(不建议使用github上的版本,因为编译有问题)
[[email protected] ~]# tar -zxf fastdfs-nginx-module.tar.gz
[[email protected] ~]# yum install -y pcre-devel
[[email protected] ~]# yum install -y openssl-devel
[[email protected] ~]# tar -zxf nginx-1.11.1.tar.gz
[[email protected] nginx-1.11.1]# ./configure --prefix=/usr/local/nginx-1.11.1/ --add-module=/root/fastdfs-nginx-module/src
[[email protected] nginx-1.11.1]# make
[[email protected] nginx-1.11.1]# make install
拷⻉配置
[[email protected] ~]# cp /root/fastdfs-nginx-module/src/mod_fastdfs.conf /etc/fdfs/
[[email protected] ~]# cd /root/fastdfs-5.11/conf/
[[email protected] conf]# cp http.conf mime.types anti-steal.jpg /etc/fdfs/
配置nginx.conf
[[email protected] ~]# vim /usr/local/nginx-1.11.1/conf/nginx.conf
#user nobody;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
server {
listen 80;
server_name localhost;
#charset koi8-r;
#access_log logs/host.access.log main;
location ~ /group[0-9]+/M00 {
root /data/fdfs/storage/store01;
ngx_fastdfs_module;
}
location ~ /group[0-9]+/M01 {
root /data/fdfs/storage/store02;
ngx_fastdfs_module;
}
location / {
root html;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
# another virtual host using mix of IP-, name-, and port-based configuration
#
#server {
# listen 8000;
# listen somename:8080;
# server_name somename alias another.alias;
# location / {
# root html;
# index index.html index.htm;
# }
#}
# HTTPS server
#
#server {
# listen 443 ssl;
# server_name localhost;
# ssl_certificate cert.pem;
# ssl_certificate_key cert.key;
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 5m;
# ssl_ciphers HIGH:!aNULL:!MD5;
# ssl_prefer_server_ciphers on;
# location / {
# root html;
# index index.html index.htm;
# }
#}
}
- 修改mod_fastdfs.conf
[[email protected] ~]# vim /etc/fdfs/mod_fastdfs.conf
tracker_server=CentOSA:22122
tracker_server=CentOSB:22122
tracker_server=CentOSC:22122
# 与本机storage.conf保持一致,三台机器分别为group1、group2、group3
group_name=group1
url_have_group_name = true
store_path_count=2
store_path0=/data/fdfs/storage/store01
store_path1=/data/fdfs/storage/store02
启动nginx
[[email protected] ~]# cd /usr/local/nginx-1.11.1/
[[email protected] nginx-1.11.1]# ./sbin/nginx -t
ngx_http_fastdfs_set pid=6207
ngx_http_fastdfs_set pid=6207
nginx: the configuration file /usr/local/nginx-1.11.1//conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx-1.11.1//conf/nginx.conf test is successful
[[email protected] nginx-1.11.1]# ./sbin/nginx
测试下载
[[email protected] ~]# fdfs_upload_file /etc/fdfs/client.conf t_employee
group2/M01/00/00/wKidw1525qKAbNHkAAAC_y4JXtg9705075
随便访问一个nginx服务查看效果
http://CentOS[A|B|C]/group2/M01/00/00/wKidw1525qKAbNHkAAAC_y4JXtg9705075?filename=t_employee
用户在请求时可以选择性添加filename参数,用于修改下载时的文件名。
FastDHT文件去重
FastDFS除了提供与nginx的集成外,还提供了文件去重的解决方案。该方案由FastDFS的作者yuqing在github上以FastDHT项目的形式贡献出来。
FastDHT is a high performance distributed hash table (DHT) which based key value pairs. It can store mass key value pairs such as filename mapping, session data and user related data.
安装
- 1.安装BerkeleyDB 下载db-4.7.25.tar.gz
[[email protected] ~]# tar -zxf db-4.7.25.tar.gz
[[email protected] ~]# cd db-4.7.25
[[email protected] db-4.7.25]# cd build_unix/
[[email protected] build_unix]# ./../dist/configure
[[email protected] build_unix]# make
[[email protected] build_unix]# make install
- 2.安装FastDHT
[[email protected] ~]# tar zxf FastDHT_v2.01.tar.gz
[[email protected] ~]# cd FastDHT
[[email protected] FastDHT]# ./make.sh
[[email protected] FastDHT]# ./make.sh install
安装结束后会在/etc目录下产生fdht文件夹
[[email protected] FastDHT]# tree /etc/fdht/
/etc/fdht/
├── fdht_client.conf
├── fdhtd.conf
└── fdht_servers.conf
- 3.修改fdhtd.conf
[[email protected] ~]# mkdir /data/fastdht
[[email protected] ~]# vim /etc/fdht/fdhtd.conf
base_path=/data/fastdht
- 4.修改fdht_servers.conf
[[email protected] ~]# vim /etc/fdht/fdht_servers.conf
group_count = 3
group0 = CentOSA:11411
group1 = CentOSB:11411
group2 = CentOSC:11411
- 5. Edit fdht_client.conf
[[email protected] ~]# vim /etc/fdht/fdht_client.conf
base_path=/tmp/
- 6. Start the FDHT service
[[email protected] ~]# fdhtd /etc/fdht/fdhtd.conf start
[[email protected] ~]# ps -axu| grep fd
root 3381 0.0 0.0 276652 1660 ? Sl 11:14 0:01 /usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf
root 3405 0.0 1.7 86396 67392 ? Sl 11:14 0:04 /usr/bin/fdfs_storaged /etc/fdfs/storage.conf
root 34263 1.3 0.4 198712 17340 ? Sl 12:45 0:00 fdhtd /etc/fdht/fdhtd.conf start
root 34275 0.0 0.0 112728 972 pts/0 S+ 12:45 0:00 grep --color=auto fd
Using the FastDHT service
- Set values
[[email protected] ~]# fdht_set /etc/fdht/fdht_client.conf lrh:user001 name='lrh',age=24;
This is FastDHT client test program v2.01
Copyright (C) 2008, Happy Fish / YuQing
FastDHT may be copied only under the terms of the GNU General
Public License V3, which may be found in the FastDHT source kit.
Please visit the FastDHT Home Page http://www.csource.org/
for more detail.
success set key count: 2, fail count: 0
- Get values
[[email protected] ~]# fdht_get /etc/fdht/fdht_client.conf lrh:user001 name,age
This is FastDHT client test program v2.01
Copyright (C) 2008, Happy Fish / YuQing
FastDHT may be copied only under the terms of the GNU General
Public License V3, which may be found in the FastDHT source kit.
Please visit the FastDHT Home Page http://www.csource.org/
for more detail.
name=lrh
age=24
success get key count: 2, fail count: 0
- Delete values
[[email protected] ~]# fdht_delete /etc/fdht/fdht_client.conf lrh:user001 name;
This is FastDHT client test program v2.01
Copyright (C) 2008, Happy Fish / YuQing
FastDHT may be copied only under the terms of the GNU General
Public License V3, which may be found in the FastDHT source kit.
Please visit the FastDHT Home Page http://www.csource.org/
for more detail.
success delete keys: name
success delete key count: 1, fail count: 0
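As the commands above show, FastDHT addresses every value by a namespace, an object id, and a key, written on the CLI as `namespace:objectId key`. A tiny formatter mirroring that addressing scheme (the class name is illustrative only):

```java
// Formats FastDHT's three-part address the way the fdht_* CLI tools
// accept it: "namespace:objectId key".
public class FdhtKey {
    public static String format(String namespace, String objectId, String key) {
        return namespace + ":" + objectId + " " + key;
    }

    public static void main(String[] args) {
        // matches the fdht_get example above: lrh:user001 name
        System.out.println(format("lrh", "user001", "name"));
    }
}
```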
Integrating FastDHT with FastDFS
- 1. Edit /etc/fdfs/storage.conf
[[email protected] ~]# vim /etc/fdfs/storage.conf
check_file_duplicate=1
keep_alive=1
#include /etc/fdht/fdht_servers.conf
Note: in FastDFS configuration files, `#include` is an actual include directive (borrowed from C syntax), not a comment; it pulls in the FastDHT server list.
- 2. Restart the fdhtd and FastDFS services
[[email protected] usr]# /usr/local/bin/fdhtd /etc/fdht/fdhtd.conf restart
[[email protected] usr]# /etc/init.d/fdfs_trackerd restart
[[email protected] usr]# /etc/init.d/fdfs_storaged restart
- Upload a file to test deduplication
[[email protected] ~]# fdfs_upload_file /etc/fdfs/client.conf t_employee
group2/M00/00/00/wKidw1528uuAEQ5NAAAC_2opwzY9266001
[[email protected] ~]# fdfs_upload_file /etc/fdfs/client.conf t_employee
group2/M00/00/00/wKidw1528weAbL6mAAAC_wNkuWg9131724
[[email protected] ~]# ls -l /data/fdfs/storage/store01/data/00/00/
total 8
-rw-r--r--. 1 root root 767 Mar 22 11:20 wKidw1522Y-AWFqVAAAC_y4JXtg0728036
lrwxrwxrwx. 1 root root  72 Mar 22 13:09 wKidw1528uuAEQ5NAAAC_2opwzY9266001 -> /data/fdfs/storage/store01/data/00/00/wKidw1528uyAHyPOAAAC_y4JXtg8258231
-rw-r--r--. 1 root root 767 Mar 22 13:09 wKidw1528uyAHyPOAAAC_y4JXtg8258231
lrwxrwxrwx. 1 root root  72 Mar 22 13:09 wKidw1528weAbL6mAAAC_wNkuWg9131724 -> /data/fdfs/storage/store01/data/00/00/wKidw1528uyAHyPOAAAC_y4JXtg8258231
As the listing shows, the two uploads produced two symbolic links that both point to the same stored file (wKidw1528uyAHyPOAAAC_y4JXtg8258231), so the duplicate content is stored only once.
SpringBoot integration with FastDFS
Add the dependency:
<dependency>
<groupId>com.github.tobato</groupId>
<artifactId>fastdfs-client</artifactId>
<version>1.26.6</version>
</dependency>
Configure application.yml
# FastDFS configuration
fdfs:
  tracker-list: CentOSA:22122,CentOSB:22122,CentOSC:22122
  # default thumbnail size
  thumb-image:
    height: 80
    width: 80
File upload
@Autowired
private FastFileStorageClient fastFileStorageClient;

/**
 * Add a user and upload the avatar to FastDFS
 *
 * @param user
 * @param multipartFile
 * @return the saved user
 */
@PostMapping(value = "/addUser")
public User addUser(User user, @RequestParam(value = "multipartFile", required = false) MultipartFile multipartFile) {
    try {
        if (multipartFile != null && !multipartFile.isEmpty()) {
            // original file name and its extension (without the dot)
            String originalFilename = multipartFile.getOriginalFilename();
            String suffix = originalFilename.substring(originalFilename.lastIndexOf(".") + 1);
            // input stream, file size, extension, metadata
            InputStream inputStream = multipartFile.getInputStream();
            FastImageFile fastImageFile = new FastImageFile(inputStream, multipartFile.getSize(), suffix, new HashSet<MetaData>());
            StorePath storePath = fastFileStorageClient.uploadFile(fastImageFile);
            // store the returned path (e.g. group1/M00/...) in MySQL
            user.setPhoto(storePath.getFullPath());
        }
        tUserService.saveUser(user);
        // registration succeeded; return to the login page
        return user;
    } catch (Exception e) {
        // registration failed; the caller should retry
        return user;
    }
}
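The tobato client expects the bare extension ("png" rather than ".png"), as the image-upload example below passes it. Extracting it is easy to get wrong with `lastIndexOf`, so here is a standalone helper (the name `suffixOf` is hypothetical, not a library API):

```java
// Returns the file extension without the leading dot, or "" when the
// name has no dot (or is null), avoiding StringIndexOutOfBoundsException.
public class FileSuffix {
    public static String suffixOf(String filename) {
        int dot = filename == null ? -1 : filename.lastIndexOf('.');
        return dot < 0 ? "" : filename.substring(dot + 1);
    }

    public static void main(String[] args) {
        System.out.println(suffixOf("61850.png"));      // png
        System.out.println(suffixOf("archive.tar.gz")); // gz
        System.out.println(suffixOf("README"));         // (empty)
    }
}
```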
- Image upload (with a thumbnail)
FileInputStream inputStream = new FileInputStream("G:/素材资料/61850.png");
FastImageFile fastImageFile = new FastImageFile(inputStream, inputStream.available(), "png",
        new HashSet<MetaData>(), new ThumbImage(150, 150));
StorePath storePath = fastFileStorageClient.uploadImage(fastImageFile);
System.out.println(storePath.getFullPath());
- Delete files
/**
 * Delete users and their photos
 *
 * @param ids
 */
@DeleteMapping(value = "/deleteUserByIds")
public void deleteUserByIds(@RequestParam(value = "ids") Integer[] ids) {
    for (Integer id : ids) {
        // remove the stored photo from FastDFS before deleting the record
        User user = tUserService.queryUserById(id);
        fastFileStorageClient.deleteFile(user.getPhoto());
    }
    tUserService.deleteByUserIds(ids);
}
- Appendix: file download
ByteArrayOutputStream baos = fastFileStorageClient.downloadFile("group3",
        "M00/00/00/wKjvgl0prTSAMjeTAL26hhzQmiQ959.png",
        new DownloadCallback<ByteArrayOutputStream>() {
            @Override
            public ByteArrayOutputStream recv(InputStream ins) throws IOException {
                ByteArrayOutputStream baos = new ByteArrayOutputStream();
                IOUtils.copy(ins, baos);
                return baos;
            }
        });
IOUtils.copy(new ByteArrayInputStream(baos.toByteArray()), new FileOutputStream("G:/素材资料/baby.png"));
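Note that uploads return a single full path such as "group3/M00/00/00/xxx.png", while downloadFile takes the group name and the in-group path as separate arguments. A small helper (hypothetical name, not a client API) to split the full path at its first slash:

```java
// Splits a FastDFS full path into {group, path-within-group}, the two
// arguments that downloadFile/deleteFile variants expect separately.
public class StorePathSplitter {
    public static String[] split(String fullPath) {
        int slash = fullPath.indexOf('/');
        return new String[]{fullPath.substring(0, slash), fullPath.substring(slash + 1)};
    }

    public static void main(String[] args) {
        String[] parts = split("group3/M00/00/00/wKjvgl0prTSAMjeTAL26hhzQmiQ959.png");
        System.out.println(parts[0]); // group3
        System.out.println(parts[1]); // M00/00/00/wKjvgl0prTSAMjeTAL26hhzQmiQ959.png
    }
}
```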