2019/10/27 15:10:48, Author: 黄兵
Python logging
I have recently been learning Python's logging module, whose main job is to record log messages produced while a program runs.
Usage is as follows:
Set_Logging.py
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
import logging


class logging_config:
    @staticmethod
    def init_logging(logging_name):
        fh = logging.FileHandler(encoding='utf-8', mode='a',
                                 filename='/var/log/SMS_Receive_Service/IP_Filter.log')
        # set up logging to file - see previous section for more details
        logging.basicConfig(handlers=[fh],
                            level=logging.DEBUG,
                            format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                            datefmt='%m-%d %H:%M')
        # define a Handler which writes INFO messages or higher to the sys.stderr
        console = logging.StreamHandler()
        console.setLevel(logging.INFO)
        # set a format which is simpler for console use
        formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
        # tell the handler to use this format
        console.setFormatter(formatter)
        # add the handler to the root logger
        logging.getLogger('').addHandler(console)
        # Now, we can log to the root logger, or any other logger. First the root...
        logging.info('Logging started!')
        # Now, define a couple of other loggers which might represent areas in your
        # application:
        return logging.getLogger(logging_name)
This first configures logging to a file: the minimum level is DEBUG, and a message format and file encoding are set.
Then a console StreamHandler is defined with a default level of INFO, a simpler display format is applied to it, and the handler is added to the root logger.
The call returns a logger instance for the given name.
It is invoked like this:
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
from contextlib import closing

import MySQLdb

from Set_Logging import logging_config


class IP_Filter:
    def __init__(self):
        # set up logging
        logger_name = 'IP_Filter'
        self._logger_ip_filter = logging_config.init_logging(logger_name)

    def check_ip_as(self, ip, ip_context_info, redis_conn, mysql_conn):
        get_ip_as = ip_context_info['as']
        with closing(mysql_conn) as conn_mysql:
            with closing(conn_mysql.cursor()) as cur:
                # SQL query statement
                query_as_sql = "SELECT * FROM ip_as_num WHERE ip_as ='{}';".format(get_ip_as)
                # execute the SQL statement
                try:
                    cur.execute(query_as_sql)
                    # number of rows matched
                    results = cur.rowcount
                except (MySQLdb.Error, MySQLdb.Warning) as e:
                    self._logger_ip_filter.error(
                        'Error while running the AS SQL query: {error_message}'.format(error_message=e))
                    return False
                if results == 0:
                    return False
                else:
                    redis_conn.sadd('ip_as_black_list', ip)
Over time a single log file grows very large, so the log needs to be rotated. The code:
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
import logging
import logging.handlers


class logging_config:
    @staticmethod
    def init_logging(logging_name):
        fh = logging.handlers.TimedRotatingFileHandler(
            "/var/log/SMS_Receive_Service/IP_Filter.log",
            when='D', interval=1, backupCount=0)
        # fh = logging.FileHandler(encoding='utf-8', mode='a',
        #                          filename='/var/log/SMS_Receive_Service/IP_Filter.log')
        fh.suffix = "%Y%m%d-%H%M.log"
        # set up logging to file - see previous section for more details
        logging.basicConfig(handlers=[fh],
                            level=logging.DEBUG,
                            format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                            datefmt='%m-%d %H:%M')
        # define a Handler which writes INFO messages or higher to the sys.stderr
        console = logging.StreamHandler()
        console.setLevel(logging.INFO)
        # set a format which is simpler for console use
        formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
        # tell the handler to use this format
        console.setFormatter(formatter)
        # add the handler to the root logger
        logging.getLogger('').addHandler(console)
        # Now, define a couple of other loggers which might represent areas in your
        # application:
        return logging.getLogger(logging_name)
The above is rather involved; here is a simpler example:
import logging
import logging.handlers

# initialize logging
logging.basicConfig()

# initialize the "nor" logger
nor = logging.getLogger("nor")
nor.setLevel(logging.INFO)

# add a TimedRotatingFileHandler to nor:
# a handler that switches to a new log file every minute
filehandler = logging.handlers.TimedRotatingFileHandler(
    "logging_test2", 'M', 1, 0)
# set the suffix, using the same format codes as strftime
filehandler.suffix = "%Y%m%d-%H%M.log"
nor.addHandler(filehandler)
Some notes on the parameters:
The constructor of TimedRotatingFileHandler is defined as follows (taking the 2.5 API as an example):
TimedRotatingFileHandler(filename [, when [, interval [, backupCount]]])
filename is the prefix of the output log file name.
when is a string, one of:
"S": Seconds
"M": Minutes
"H": Hours
"D": Days
"W": Week day (0=Monday)
"midnight": Roll over at midnight
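These units can be sanity-checked: the handler converts when and interval into a rollover span in seconds, exposed through its interval attribute. A small sketch (the temporary path is a stand-in, not the log location used above):

```python
import logging.handlers
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.log")

# when='M', interval=5 -> roll over every 5 minutes (300 seconds)
h = logging.handlers.TimedRotatingFileHandler(path, when='M', interval=5)
print(h.interval)   # 300
h.close()

# when='midnight' always means one day
h2 = logging.handlers.TimedRotatingFileHandler(path, when='midnight')
print(h2.interval)  # 86400
h2.close()
```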
interval is how many units of when the logger waits before it automatically rolls over to a new file. The rotated file's name is determined by filename + suffix; if that name collides with an earlier file, the earlier file is overwritten. In some cases, therefore, the suffix must be chosen so it cannot repeat within one when interval.
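Note that the handler's built-in default suffix is already unique per rollover unit, which is one way to sidestep the collision problem; only a custom fh.suffix needs this care. A quick check of the defaults:

```python
import logging.handlers
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.log")

# rolling daily tags rotated files with the date only
daily = logging.handlers.TimedRotatingFileHandler(path, when='D')
print(daily.suffix)     # %Y-%m-%d  -- unique per day
daily.close()

# rolling per minute includes hour and minute, so names stay unique
minutely = logging.handlers.TimedRotatingFileHandler(path, when='M')
print(minutely.suffix)  # %Y-%m-%d_%H-%M -- unique per minute
minutely.close()
```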
backupCount is the number of rotated log files to keep. The default, 0, never deletes old logs automatically. If it is set to 10, then on each rollover the library checks whether more than 10 rotated files exist and, if so, deletes the oldest ones first.
References:
1. Logging Cookbook