
website vulnerability scanning platform

Posted by chiappelli at 2020-02-29

Detailed description document for the vulnerability scanner

1: Directory structure description

scanner:
- topmgr.py: background service responsible for managing scan tasks
- topscan.py: entry program of the scanning process
- lib: vulnerability-scanning workflow code
- scripts: rule script directory
topweb: front-end web interface
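The post does not include the actual source, so as an orientation aid here is a minimal, assumed sketch of the split described above: topmgr.py watches the task table and hands each pending task to a topscan.py process. The table name, column values, credentials and "pending" condition in this sketch are all hypothetical.

    # Hypothetical sketch of the topmgr.py / topscan.py split described above;
    # not the project's real code. Table name, columns and credentials are assumptions.
    import subprocess
    import time

    import MySQLdb  # backend database driver (see the library list below)


    def poll_and_dispatch():
        conn = MySQLdb.connect(host="127.0.0.1", user="scan", passwd="scan", db="scanner")
        while True:
            cur = conn.cursor()
            # assume an empty progress field means the task has not been started yet
            cur.execute("SELECT id FROM task WHERE progress = ''")
            for (task_id,) in cur.fetchall():
                # topscan.py is the entry program of one scanning process
                subprocess.Popen(["python", "scanner/topscan.py", str(task_id)])
            cur.close()
            time.sleep(5)


    if __name__ == "__main__":
        poll_and_dispatch()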

2: Introduction to the libraries used

Note: the versions listed below are simply the ones I use; they are not strict requirements.

Backend:
- python 2.7.6: Python 2 is recommended
- gevent 1.0: high-performance Python network library; requires greenlet to be installed
- django 1.6.1: Python web framework
- mysqldb: driver for the backend database; I use MySQL
- pywin32: the scanner runs as a Windows service, so the pywin32 package is needed
- beautifulsoup4: the crawler uses bs4
- requests: easy-to-use HTTP request package (no need to install; it is bundled in the thirdparty directory)
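To show how these backend libraries fit together (an illustration, not code from the project), here is a minimal Python 2 sketch: gevent monkey-patches the socket layer so many requests calls run concurrently in greenlets, and bs4 parses the responses for the crawler. The URLs are placeholders.

    # Minimal illustration of gevent + requests + bs4 on Python 2.7; not project code.
    from gevent import monkey
    monkey.patch_all()  # make the standard socket module cooperative

    import gevent
    import requests  # bundled under the thirdparty directory
    from bs4 import BeautifulSoup


    def fetch_links(url):
        resp = requests.get(url, timeout=10)
        soup = BeautifulSoup(resp.text, "html.parser")
        # collect hrefs the crawler could feed into the URL table
        return [a.get("href") for a in soup.find_all("a") if a.get("href")]


    if __name__ == "__main__":
        urls = ["http://example.com/", "http://example.com/about"]  # placeholder URLs
        jobs = [gevent.spawn(fetch_links, u) for u in urls]
        gevent.joinall(jobs, timeout=30)
        for job in jobs:
            print job.value  # Python 2 print statement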

Introduction to the front-end libraries:
- bootstrap: the interface generally uses Bootstrap v3
- jquery
- jsTree: the tree control uses jsTree

3: Database description

Manually create a new database; by default, all of the tables can then be created through Django with: python manage.py syncdb
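For reference, a Django 1.6 MySQL configuration could look like the excerpt below. The database name, user, password, host and port are placeholders; only the MySQL engine choice and the syncdb command follow from the text above.

    # settings.py excerpt (assumed to live under topweb) -- placeholder credentials.
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.mysql",  # uses the mysqldb driver listed above
            "NAME": "scanner",                     # the database created manually
            "USER": "scan",
            "PASSWORD": "scan",
            "HOST": "127.0.0.1",
            "PORT": "3306",
        }
    }

    # After creating the database, build all tables with:
    #   python manage.py syncdb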

Table introduction (a model sketch follows after this list):
- task table: scan task table.
  - base field: mainly used to handle different sites under the same domain name. For example, if the start URL is http://betteryinzhixin.com/vul/vul.php and base is set to /vul/, only the /vul/ directory will be scanned.
  - progress field: used to track scanning progress. Each time a scanning rule is started, its rule ID is appended; "end" is appended when the scan finishes.

- url table: URL table for the crawler
- result table: scan result table
- rule table: rule table.
  - priority field: priority from 1 to 10; the smaller the value, the earlier the rule is checked.
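The table and field descriptions above translate naturally into Django 1.6 models. The sketch below is an assumption: the base, progress and priority fields come from the text, while the remaining fields, lengths and relations are guesses added only to make the example complete.

    # Hedged Django 1.6 model sketch of the tables described above; only the
    # base, progress and priority fields come from the document, the rest is assumed.
    from django.db import models


    class Task(models.Model):
        start_url = models.CharField(max_length=1024)        # assumed field
        base = models.CharField(max_length=256, blank=True)  # e.g. "/vul/" limits the scan scope
        progress = models.TextField(blank=True)              # rule IDs appended as rules start, then "end"


    class Url(models.Model):
        task = models.ForeignKey(Task)                       # assumed relation
        url = models.CharField(max_length=1024)              # URLs found by the crawler


    class Result(models.Model):
        task = models.ForeignKey(Task)                       # assumed relation
        detail = models.TextField()                          # scan result details (assumed)


    class Rule(models.Model):
        name = models.CharField(max_length=256)              # assumed field
        priority = models.IntegerField(default=5)            # 1-10; smaller values are checked first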