Recently, 6G mobile communication technology has been actively researched, and it faces issues of security and privacy. The various technologies and applications to be used in 6G will bring new security issues. Among these technologies, CLA and UAV have a ripple effect that can cause severe damage to the entire network. In particular, in the case of the UAV base station, various security points, such as devices, software, and communications, must be considered. In this paper, the security use-cases and threat scenarios of both CLA and UAV in the 6G environment are described. First, the major threats that can occur in each environment are described through the system architecture and communication process. Then, the security use-cases are described in consideration of the threats and characteristics of each environment. Finally, the attacks and security use-cases of each environment are depicted as respective scenarios.
This paper presents novel insights into feature extraction and stylization of character motion in the instantaneous frequency domain by proposing a method using the Hilbert-Huang transform (HHT). HHT decomposes human motion capture data in the frequency domain into several pseudo-monochromatic signals, so-called intrinsic mode functions (IMFs). We propose an algorithm to reconstruct these IMFs and extract motion features automatically using the Fibonacci sequence in the link-dynamical structure of the human body. Our research revealed that these reconstructed motions can be divided into three main parts: a primary motion and a secondary motion, which correspond to the animation principles, and a basic motion consisting of posture and position. Our method helps animators edit target motions by extracting and blending the primary or secondary motions extracted from a source motion. To demonstrate results, we applied our proposed method to general motions (jumping, punching, and walking motions) to achieve different stylizations.
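A minimal sketch of the decomposition step, assuming the PyEMD package (pip install EMD-signal) as a stand-in for the paper's HHT pipeline; the joint-angle signal and the grouping comments are illustrative, not the paper's exact procedure.

```python
# Decompose a 1-D motion channel into IMFs via empirical mode decomposition,
# the core step of HHT. The signal here is a synthetic joint-angle curve.
import numpy as np
from PyEMD import EMD

t = np.linspace(0, 4 * np.pi, 512)
# Hypothetical joint-angle channel: slow posture drift + fast oscillation + noise.
signal = 0.5 * np.sin(0.5 * t) + 0.2 * np.sin(8 * t) + 0.05 * np.random.randn(512)

emd = EMD()
imfs = emd.emd(signal)                 # rows are IMFs, highest frequency first
residue = signal - imfs.sum(axis=0)

# In the paper's terms, high-frequency IMFs would be grouped into a secondary
# motion, mid-frequency IMFs into a primary motion, and the low-frequency
# remainder into a basic (posture/position) motion before blending.
print(f"extracted {imfs.shape[0]} IMFs")
```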
Compared to traditional power grid systems, a smart grid is capable of precisely and instantly detecting the segments of the power grid that break down, isolating the affected parts and segments, and rerouting transmission paths to unaffected segments, thus providing a stable electricity transmission and distribution environment. However, electricity losses in smart grid systems can be caused by electric meter errors, transmission line failures, or even illegal behavior. To detect criminal behavior in a smart grid environment, an adaptive ensemble algorithm is presented in this paper, which is composed of long short-term memory, a convolutional neural network, and a hybrid multi-head attention convolutional network. A genetic algorithm is also used to find good hyperparameters for the voting mechanism (also called the meta-learner) to enhance its accuracy. Experimental results show that the proposed algorithm finds better results than traditional classification algorithms for the electricity theft detection problem in terms of the precision-recall area under the curve (PR-AUC) and F1-score.
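A minimal, illustrative sketch of the idea of using a genetic algorithm to tune the voting weights of an ensemble meta-learner; the fitness function, operators, and the random stand-ins for the three base models' outputs are all simplifying assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, pop_size, generations = 3, 20, 50

# Hypothetical validation-set probabilities from the LSTM, CNN, and hybrid
# attention network (rows: samples, columns: models), plus true labels.
probs = rng.random((200, n_models))
labels = rng.integers(0, 2, 200)

def fitness(w):
    """F1-like score of the weighted soft vote under weights w."""
    pred = (probs @ (w / w.sum()) > 0.5).astype(int)
    tp = np.sum((pred == 1) & (labels == 1))
    fp = np.sum((pred == 1) & (labels == 0))
    fn = np.sum((pred == 0) & (labels == 1))
    return 2 * tp / max(2 * tp + fp + fn, 1)

pop = rng.random((pop_size, n_models))
for _ in range(generations):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]          # selection
    cut = rng.integers(1, n_models, pop_size // 2)
    children = np.where(np.arange(n_models) < cut[:, None],     # crossover
                        parents, rng.permutation(parents))
    children = children + rng.normal(0, 0.1, children.shape)    # mutation
    pop = np.vstack([parents, np.clip(children, 1e-6, None)])

best = pop[np.argmax([fitness(w) for w in pop])]
print("best voting weights:", best / best.sum())
```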
Several recent studies have pointed out that an effective traffic signal/light control strategy can mitigate the traffic congestion problem, and therefore various solutions have been presented for this optimization problem. Multi-agent reinforcement learning (MARL) is one of the promising methods because it can provide good traffic control strategies for such complex environments. However, because each agent in most MARL-based algorithms has only partial information from the observations of its own intersection, a traffic control plan based on such incomplete information may not always be useful for improving the traffic of an entire city. To enhance the performance of MARL in solving the traffic light control problem, we present an algorithm based on an effective communication protocol that shares information between agents of neighboring intersections to make an integrated traffic light control plan. Moreover, a two-step decision mechanism is presented in this study to further improve the performance of MARL for traffic light control. To evaluate the performance of the proposed algorithm, we compared it with several message-passing-based algorithms on the Simulation of Urban MObility (SUMO) simulator. The results show that the proposed algorithm is capable of finding better results for traffic light control problems than all the other message-passing-based algorithms compared in this study.
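A purely illustrative sketch of the neighbor-communication idea: before acting, each intersection agent augments its own observation with messages from its neighbors, and decides in two steps. The topology, observation format, and decision rules below are placeholder assumptions, not the paper's protocol.

```python
from typing import Dict, List
import numpy as np

neighbors: Dict[str, List[str]] = {    # hypothetical 2x2 grid of intersections
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"],
}
obs = {k: np.random.rand(4) for k in neighbors}   # e.g. queue lengths per approach

def communicate(agent: str) -> np.ndarray:
    """Concatenate the agent's own observation with its neighbors' messages."""
    msgs = [obs[n] for n in neighbors[agent]]
    return np.concatenate([obs[agent], *msgs])

def decide(agent: str) -> int:
    """Two-step decision (simplified): first decide whether to switch phase;
    only if switching, pick the next phase."""
    joint = communicate(agent)
    switch = joint[:4].mean() > joint.mean()             # step 1: switch or keep
    return int(np.argmax(joint[:4])) if switch else -1   # step 2: choose phase

for a in neighbors:
    print(a, decide(a))
```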
Growing 5G applications require a user plane with high forwarding throughput and low packet loss. To meet these requirements, various 5G acceleration approaches have emerged, especially for the User Plane Function (UPF). State-of-the-art UPF acceleration is divided into software acceleration and hardware offloading. To overcome the limitations of both, we design a coexistence architecture for UPF, named CeUPF. We design and describe CeUPF's rules, which enable CeUPF to reuse existing 5G user plane software functions while providing high-performance hardware forwarding. The UPF's transmitting function is implemented by offloading to a smart NIC and a P4 switch. We evaluate and compare the performance of the architecture; our results show that offloading to the P4 switch is better. Compared with the benchmark, CeUPF's bandwidth is improved 10-33 times, and its throughput is improved 2.12-2.67 times. Our work also presents an open-source platform to validate CeUPF, which is consistent with UPF.
Developing a good convolutional neural network (CNN) architecture manually by trial and error is typically extremely time-consuming and requires a lot of effort. That is why several recent studies have attempted to develop ways to automatically construct a suitable CNN architecture. In this study, an effective neural architecture search (NAS) algorithm based on a novel metaheuristic algorithm, search economics (SE), is presented for CNNs to improve the accuracy of image classification. The basic idea of the proposed algorithm is to use the "expected value" instead of the "objective value" to evaluate a set of searched solutions, i.e., neural architectures in this case. As such, the searches of the proposed algorithm tend toward high-potential regions of the solution space. Simulation results show that the proposed algorithm outperforms a genetic algorithm-based NAS algorithm in terms of accuracy, especially for complex image classification problems.
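A purely illustrative sketch of the "expected value" idea: a region's score combines the best accuracy seen there with how little it has been searched, so under-explored but promising regions attract the next searches. The exact SE formulation from the paper is not reproduced here; this only conveys the contrast with ranking solutions by raw objective value.

```python
import random

regions = {r: {"best": 0.0, "visits": 0} for r in range(4)}

def expected_value(r):
    """Illustrative region potential: best result so far plus an exploration bonus."""
    info = regions[r]
    exploration_bonus = 1.0 / (1 + info["visits"])
    return info["best"] + exploration_bonus

for step in range(20):
    r = max(regions, key=expected_value)   # search the highest-potential region
    acc = random.random()                  # stand-in for evaluating an architecture
    regions[r]["best"] = max(regions[r]["best"], acc)
    regions[r]["visits"] += 1

print({r: round(expected_value(r), 3) for r in regions})
```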
A visible watermark removal network that combines partial and standard convolution is proposed in this paper. Existing deep learning-based watermark removal methods apply a standard convolutional network to the watermarked image, using convolutional filter responses conditioned on both valid pixels and substitute values in the area covered by the watermark pattern (typically the mean value). When the watermark is almost opaque, this often leads to artifacts such as color discrepancy and blurriness. To address this, we propose a watermark removal network that uses partial convolution, where the convolution is masked and re-normalized to be conditioned only on valid pixels, and then uses a dual-input branch network based on ordinary convolution for optimization: one input branch is connected to the partial convolution network, and the other accepts the watermarked image directly. With this mechanism, different convolutions can be combined to remove watermarks of different transparency. The scheme is evaluated on a large watermarked image dataset. Experimental results show that this method can effectively remove watermarks of different colors and transparency.
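A minimal sketch of the partial-convolution operation described above: the convolution is computed only over valid (non-watermark) pixels and re-normalized by the fraction of valid pixels in each window. This follows the standard single-channel-mask formulation and is not the paper's full network; shapes and the mask region are illustrative.

```python
import torch
import torch.nn.functional as F

def partial_conv2d(x, mask, weight, bias=None, padding=1):
    """x: (N,C,H,W) image; mask: (N,1,H,W) with 1 = valid, 0 = watermarked."""
    ones = torch.ones(1, 1, *weight.shape[2:])
    # Number of valid pixels in each sliding window.
    valid = F.conv2d(mask, ones, padding=padding).clamp(min=1e-8)
    out = F.conv2d(x * mask, weight, padding=padding)
    out = out * (ones.numel() / valid)          # re-normalize by valid ratio
    if bias is not None:
        out = out + bias.view(1, -1, 1, 1)
    # Mask update: a window containing any valid pixel becomes valid downstream.
    new_mask = (valid > 1e-8).float()
    return out, new_mask

x = torch.randn(1, 3, 32, 32)
mask = torch.ones(1, 1, 32, 32); mask[:, :, 8:16, 8:16] = 0   # watermark area
w = torch.randn(16, 3, 3, 3)
y, m = partial_conv2d(x, mask, w)
print(y.shape, m.mean().item())
```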
With the advances in the internet of things (IoT), wireless communication, and artificial intelligence, people can nowadays enjoy autonomous driving systems. Meanwhile, security issues found in vehicular ad-hoc networks (VANETs) have attracted the attention of researchers from different disciplines in recent years. Since traditional rule-based algorithms are ineffective in detecting misbehavior in VANETs, developing an effective algorithm to deal with this problem has become a promising research topic. As such, an integrated algorithm for misbehavior detection systems (MDS) is presented in this paper. It is composed of convolutional neural network (CNN) and long short-term memory (LSTM) models to reconstruct the location information, as well as a support vector machine (SVM) as a binary classifier to check whether a vehicle is compromised. The vehicular reference misbehavior (VeReMi) extension dataset is used to evaluate the performance of the proposed algorithm and all the other detection algorithms compared in this paper. Experimental results show that the proposed algorithm is capable of detecting 95.37% of the compromised vehicles. In terms of the F1 score, the proposed algorithm provides better results than all the other detection algorithms compared in this study.
Developing an effective search algorithm to find a good solution to the vehicle routing problem is an important issue, for a powerful search strategy can also be used to make better decisions for complex problems in real life. The multi-objective vehicle routing problem with time windows (MOVRPTW) is a famous routing problem that aims to minimize both the number of vehicles and the traveling distance at the same time. This paper presents an effective metaheuristic algorithm for solving MOVRPTW based on a new search algorithm named search economics (SE), the key ideas of which are twofold: (1) to portray the solution space and (2) to figure out the regions in the solution space with higher potential for finding good solutions---both based on the solutions that have been searched so far. By building on these two distinguishing features of SE, the proposed algorithm is capable of avoiding being trapped in a local optimum during the early stage of the convergence process, thus making it possible to find good results for complex optimization problems. The experimental results indicate that the proposed algorithm performs better than other well-known metaheuristic algorithms in solving MOVRPTW.
The so-called neural architecture search (NAS) provides an alternative way to construct a "good neural architecture," which would normally outperform hand-made architectures, for solving complex problems without domain knowledge. However, a critical issue for most NAS techniques is that they are computationally very expensive, because several complete or partial training processes are involved in evaluating the goodness of a neural architecture during the NAS process. To mitigate the cost of evaluating a single neural architecture found by the search algorithm of NAS, we present an efficient NAS in this study, called genetic algorithm and noise immunity for neural architecture search without training (GA-NINASWOT). The genetic algorithm (GA) in the proposed algorithm is used to search for high-potential neural architectures, while a modified scoring method based on neural architecture search without training (NASWOT) replaces the training process for each neural architecture found by the GA to measure its quality. To evaluate the performance of GA-NINASWOT, we compared it with several state-of-the-art NAS techniques, including weight-sharing methods, non-weight-sharing methods, and NASWOT. Simulation results show that GA-NINASWOT outperforms all the other state-of-the-art weight-sharing methods and NASWOT compared in this study in terms of accuracy and computational time. Moreover, GA-NINASWOT gives results comparable to those found by the non-weight-sharing methods while reducing the search time by 99%.
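A minimal sketch of the NASWOT-style training-free scoring idea that the paper builds on: each input induces a binary code of ReLU activation signs, and architectures whose codes differ more across a mini-batch score higher via the log-determinant of the agreement kernel. The tiny network here is purely illustrative, and the paper's noise-immunity modification is not reproduced.

```python
import torch
import torch.nn as nn

def naswot_score(model, batch):
    codes = []
    def hook(_m, _i, out):                  # record each ReLU activation pattern
        codes.append((out > 0).flatten(1).float())
    handles = [m.register_forward_hook(hook)
               for m in model.modules() if isinstance(m, nn.ReLU)]
    with torch.no_grad():
        model(batch)
    for h in handles:
        h.remove()
    c = torch.cat(codes, dim=1)             # (batch, total ReLU units)
    # K[i, j] = number of units where samples i and j agree.
    k = c @ c.t() + (1 - c) @ (1 - c).t()
    return torch.logdet(k + 1e-4 * torch.eye(len(batch))).item()

net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
print(naswot_score(net, torch.randn(32, 16)))   # higher = more expressive
```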
To allow developers to implement operable code-level modification tasks directly from user feedback, and thus achieve rapid and continuous app updates and releases, we propose an efficient automated approach named LCFCR, which leverages natural language processing and clustering algorithms to group user reviews. It then enriches the semantic information of each group. Further, by combining the textual information from both commit messages and source code, it automatically localizes potential change files. We evaluated LCFCR on 10 open-source mobile apps. The experiments demonstrate that the proposed approach outperforms the state-of-the-art baseline work in terms of clustering and localization accuracy, and thus produces more reliable results.
At present, using large numbers of insecure IoT devices to form botnets and launch DDoS attacks has become one of the main threats to IoT security. However, most existing DDoS defense methods are limited to being implemented within a network domain managed by a single controller. In fact, many of the IoT devices from which attackers build botnets are located in the jurisdictions of different controllers. Therefore, a mechanism is needed for the domains to cooperate with each other and share information about the attack source, so as to realize defense against DDoS attacks across multiple domains. Blockchain technology's decentralization and trusted group collaboration mechanism provide a new approach to cross-domain DDoS defense. This article combines SDN and blockchain with IoT and proposes a cross-domain collaborative DDoS defense scheme based on blockchain and the SDN architecture. It allows multiple SDN-based domains to collaborate with each other and share attack source information in a decentralized manner, making collaborative DDoS defense across multiple domains more flexible. Experimental results show that the defense mechanism is flexible, feasible, and safe.
Ocean monitoring depends on widely deployed oceanographic buoys and observation stations that integrate various types of ocean sensors. These ocean sensors often work in harsh environments, so the data collected by the sensors are sometimes abnormal, which affects the accuracy of downstream applications, e.g., ocean data assimilation and intelligent data mining. It is not realistic to conduct abnormal data detection (quality control) on huge amounts of ever-increasing data relying only on human-based services. Hence, the current research trend is to adopt AI models to realize automatic quality control, which largely depends on the accuracy of the AI model in ocean data modeling and prediction. Therefore, in order to accurately model and predict the observation data, this paper explores and compares various sequential data modeling methods that are widely adopted in time series forecasting. We further propose a Transformer model for accurate real-time prediction of these data. The experimental results show that our proposed method is better than the baselines in both single-step and multi-step prediction settings. As far as we know, this is the first work that applies a Transformer-based model to the intelligent analysis of ocean observation data.
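A minimal sketch of a Transformer encoder for single-step time-series prediction, in the spirit of the model above; the dimensions, window length, and random data are illustrative placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TSTransformer(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=2, window=24):
        super().__init__()
        self.embed = nn.Linear(1, d_model)          # scalar reading -> d_model
        self.pos = nn.Parameter(torch.zeros(window, d_model))  # learned positions
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)           # predict the next value

    def forward(self, x):                           # x: (batch, window, 1)
        h = self.encoder(self.embed(x) + self.pos)
        return self.head(h[:, -1])                  # use the last position's state

model = TSTransformer()
history = torch.randn(8, 24, 1)                     # 8 sequences of 24 readings
print(model(history).shape)                         # -> torch.Size([8, 1])
```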
The FAT32 file system is a common file system, especially on USB devices, with good stability and compatibility. When the FAT file system, as an important carrier of data storage, is attacked, data in the file system can easily be stolen by an adversary. Even deleted file data can be recovered using data recovery technology, one of the hot topics of computer forensics. A scheme of traceless data deletion for the Windows FAT32 file system is presented in this paper to delete a file completely. Experimental results show that the deleted files cannot be recovered and the deletion operation leaves no trace.
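A minimal, illustrative sketch of the content-wiping part of such a scheme: overwrite the file's data with random bytes before unlinking, so recovery tools cannot restore the old contents. This is only one ingredient; a complete traceless FAT32 scheme, as the paper describes, must also wipe the directory entry and the FAT chain, which is not shown here.

```python
import os

def wipe_and_delete(path: str, passes: int = 1) -> None:
    """Overwrite a file's contents in place, then remove it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # overwrite the existing clusters
            f.flush()
            os.fsync(f.fileno())        # force the write to the device
    os.remove(path)

# wipe_and_delete("E:/secret.docx")     # e.g. a file on a FAT32 USB stick
```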
Due to its strong protection of anonymity, the Darknet has been exploited by criminals to distribute harmful content and banned items, such as drugs, weapons, and malware, which are regarded as public hazard entities. The task of public hazard entity recognition can help to detect and analyze malicious activities in the Darknet. This paper focuses on Chinese public hazard entity recognition in the field of illegal drugs. To evade detection and surveillance, Chinese public hazard entities in the Darknet usually take disguised forms, such as homophones and multiple entities in a sentence, which makes them harder to identify using traditional entity recognition methods. In this paper, we present an effective deep learning-based multi-information fusion model to identify Chinese public hazard entities in the Darknet. Specifically, we introduce grammatical information by adding Pinyin and lexical features, and strengthen the semantic features by adding word vectors from an advanced pre-trained language representation model. Then we combine these three parts with a classical sequence annotation architecture used in general named entity recognition to form our ultimate model. Finally, we construct a real dataset from drug-related groups in the Darknet and conduct several experiments to evaluate our model. The experimental results verify that our proposed model achieves good performance on the recognition of Darknet public hazard entities.
With the emergence of massive numbers of smart terminal devices, the network carries explosive growth in data traffic. At the same time, emerging real-time services place high demands on network transmission delay. The contradiction between limited network resources and higher delivery requirements drives a paradigm shift in current data transmission. In this paper, we propose a network transmission paradigm with collaborative data transmission and computation processing based on in-network computing and Software-Defined Networking (SDN) to optimize the transmission process, and we analyze the transmission and computation process using image services as an example. By building a transmission delay model, we show that the delay can be reduced by using the network-computing collaboration mechanism, and we give a design for its implementation and deployment. Finally, we discuss the problems and challenges according to current developments in the field of in-network computing.
With the rapid development of the Internet, webpages containing abusive information such as pornography and gambling have emerged in an endless stream. These webpages use various methods to evade traditional detection, seriously worsening the Internet environment. Thus, how to accurately identify these webpages is becoming more and more significant. In response to this problem, by combining text recognition and text classification, this paper proposes an abusive webpage detection method based on screenshots, which can efficiently detect and classify webpages by acquiring the webpage information actually visible to the user. This paper also uses the traditional web crawler method to conduct a comparative experiment, verifying the accuracy and advantages of the proposed method. This work provides technical support for fighting against illegal activities and purifying the Internet environment.
Based on pork price fluctuations in Shandong province in recent years, this paper collects relevant data for comprehensive analysis and research. In the analysis, the pork price fluctuation is taken as the starting point, and the correlated factors are analyzed using statistical and data mining methods to explain the reasons for the fluctuation and extract the relevant features. A pork price prediction method based on machine learning is proposed to forecast pork prices and give warnings. Within the comprehensive framework for pork price research, it is found that the highly correlated factors are the prices of similar pork and the prices of substitute products. The impact of substitute-product price changes on pork price fluctuations lasts longer than that of similar pork prices. On the basis of these prices, the prediction and warning model can be used to analyze and study future pork prices. The warning interval division of the test set for pork price volatility is consistent with the actual volatility trend.
Soft sensors are usually used to predict difficult-to-measure variables in wastewater treatment. Nowadays, neural networks, especially dynamic neural networks, are widely used to predict and monitor these variables. However, traditional training methods for dynamic neural networks cannot fully consider the uncertainties of measurements and the model, resulting in inaccurate parameter estimates and degraded prediction performance. Therefore, this paper proposes a novel adaptive multi-output soft sensor. In the proposed method, the square root unscented Kalman filter (SR-UKF) is adopted to update the weights of each layer of the neural network in a timely manner. Through this strategy, we can effectively improve the prediction accuracy of standard neural networks. Moreover, the sequential nature of the Kalman filter provides a new way to update the parameters online. The proposed method is verified on a data set from the University of California database (UCI database). The results illustrate that the proposed model achieves better prediction accuracy than the traditional models.
With the development of electromagnetic simulation technology and the increasing demand for simulation, simulation verification based on numerical simulation has received extensive attention from various research fields at home and abroad. Solving the linear sparse matrix equations generated in the electromagnetic simulation process is the biggest bottleneck restricting the running time of the program. Parallel computing, as an effective means to improve the calculation speed and processing capacity of computer systems, can further expand the scale of problem solving and shorten the calculation time. This paper studies a parallel solution method for large-scale sparse linear equations based on the multifrontal method. We port our program to the SW26010-pro and utilize the powerful heterogeneous computing units of the new-generation Sunway supercomputer. Extensive experiments show that the hotspot functions for the resulting sparse matrix equation gain an 81x speedup compared to the master-core version, and the overall computation gains a 64x speedup.
As a new computing paradigm, Cloud-Edge collaborative computing has injected new vitality into edge computing. To make full use of the resources of the cloud computing server and edge computing servers in the Cloud-Edge environment, and to solve the problem of uneven load among edge servers in different regions, this paper introduces a multi-server cooperative offloading strategy for IoT systems. Technically, a Task Pre-Offloading Algorithm, named TPOA, is devised first to determine the offloading positions of some tasks in advance, reducing the range of devices participating in the game. Then a game-theoretic multi-server task offloading algorithm, named GT-MSTO, is designed to perform offloading games between the cloud server and multiple edge servers to provide an optimal offloading location for each task. Simulation results validate that TPOA and GT-MSTO can effectively obtain the minimum task delay.
A retrieval question-answering (Q&A) system based on a Q&A library is a system that retrieves the most similar question from the Q&A library to get the correct answer. Classic approaches only use TF-IDF, BM25, and other algorithms to calculate the shallow correlation between the sentences in the input question and the sentences in the Q&A library, without fully considering the semantic information of the sentences. Recently, pre-trained language models have made remarkable achievements in many fields of natural language processing (NLP). In this work, we apply a pre-trained language model for the medical field to the text matching stage of a medical question answering system. We also propose an improved deep text matching model based on BERT, the Potential Topic extraction Medical BERT model (PT-McBERT). We conduct several experiments on the medical text matching dataset CHIP-STS; the results show that our model achieves improvements over the classic methods. Finally, we design a real-world Chinese medical question answering system and apply the optimal model to the question matching stage, which greatly improves the matching effect of the system.
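A minimal sketch of BERT-based question matching for the retrieval stage: encode the user question and each library question, then rank by cosine similarity of the [CLS] embeddings. The public "bert-base-chinese" checkpoint stands in for the medical-domain PT-McBERT model, and the two library questions are invented examples.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-chinese")
bert = AutoModel.from_pretrained("bert-base-chinese").eval()

def embed(texts):
    """Return the [CLS] vector for each input text."""
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return bert(**batch).last_hidden_state[:, 0]

library = ["高血压吃什么药?",          # "What medicine for hypertension?"
           "感冒发烧怎么办?"]          # "What to do about a cold and fever?"
query = embed(["血压高应该服用哪种药物?"])   # "Which drug for high blood pressure?"
scores = torch.cosine_similarity(query, embed(library))
print(library[int(scores.argmax())])        # best-matching library question
```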
With the widespread development of cloud-native technologies, the security of cloud-native systems has gradually attracted attention. Cloud-native systems use PKI (Public Key Infrastructure) to provide an important guarantee of network security; however, heavy certificate management gradually makes the PKI mechanism a bottleneck for cloud-native systems. In contrast with PKI, the certificateless public key mechanism has many advantages, such as lightness, which makes it very suitable for authentication in cloud-native environments. However, before introducing the certificateless public key mechanism into the cloud-native environment, the first problem that must be solved is identity management. This paper proposes a blockchain-based certificateless identity management (BCL-IM) mechanism for cloud-native environments, which uses the blockchain as a trust endorsement for service identities and service public key distribution records, and ensures the timeliness of identity management through blockchain node notification. In addition, a communication disconnect mechanism is proposed and its usage scenarios are analyzed to ensure data security. Finally, we give a security analysis and performance evaluation to show that our scheme satisfies the requirements for secure service-to-service communication in cloud-native environments based on the certificateless public key mechanism.
Graph structures and graph partitioning algorithms are widely used in many fields of scientific computing and artificial intelligence. With the development of big data, the magnitude of graphs is increasing rapidly, which makes a high-performance graph partitioning library extremely important. In this paper, we propose swMETIS, a high-performance graph partitioning library for the Sunway manycore architecture. To exploit more parallelism and performance on the Sunway manycore architecture, we introduce the concept of a hierarchical scheme and propose an instruction-level hierarchical scheme for the multilevel k-way partitioning algorithm. This scheme allows us to exploit two levels of parallelism, developing more parallelism than traditional one-level parallel schemes. Moreover, we propose a dropping strategy with a threshold and a continuous cycle layout, which effectively balance the processor load, improve data access efficiency, and offload work to the more powerful CPE array. Our experimental results show that swMETIS achieves a 9.52x speedup on average over the state-of-the-art ParMETIS library.
The parallel programming model of the Sunway supercomputer, based on accelerated thread libraries, plays an important role in the acceleration performance of master-slave core parallelism. At present, two thread-based programming libraries, OpenACC and Athread, are provided. Although OpenACC is convenient, its parallel efficiency is not high and it is disadvantageous for deep optimization; while Athread is flexible and amenable to deep optimization, it imposes a huge workload compared with OpenACC. This paper is based on a three-tier program template in which the main program calls the master program and the master program then calls the slave program, and the Rust language is used for lexical and grammatical analysis. Through these steps, a method that automatically converts a source program into Athread-format code is proposed, and some useful optimization methods are also integrated, such as parameters passed by a structure, local static variables, and slave-core partition parallelism. Finally, a prototype of a conversion tool from Fortran and C code to Athread code is designed and implemented. This method can avoid the vast majority of errors in code writing and greatly improve the efficiency of many-core work for researchers.
As Internet technology develops at a fast pace, telecom fraud cases happen more frequently, causing more and more people to suffer great losses. This phenomenon should be kept under control, or it will bring bigger hazards to our society. Blockchain technology, which has been adopted in many other fields for characteristics such as reliability and security, especially after the popularization of Bitcoin, offers a possible remedy. This study proposes a blockchain-based statistical library for telecom fraud cases, applying the technology to the prevention of Internet fraud.
With the increasing intensity of marine fishing, the living space of marine organisms has been squeezed more and more severely. The establishment of marine ranches effectively alleviates the decrease in fish numbers caused by marine fishing. How to analyze underwater environmental data by means of artificial intelligence, monitor the condition of underwater fish in real time, and promote the healthy growth of marine organisms is an important topic of current research. This paper addresses the problems of low accuracy in multi-scale object detection and slow model inference. A deep learning model combining a CNN and an attention mechanism is proposed, together with an inference optimization algorithm based on multi-algorithm fusion, finally realizing fast multi-object, multi-scale detection of fish images.
Despite being extensively explored, current techniques in sequential data modeling and prediction are generally designed for solving regression tasks in a batch learning setting, making them not only computationally inefficient but also poorly scalable in real-world applications, especially for real-time intelligent ocean data quality control (QC), where the data arrive sequentially and QC should be conducted in real time. This paper investigates online learning for ocean data streams by resolving two main challenges: (i) how to develop a deep learning model that captures the complex ocean data distribution, which may evolve dynamically, namely tackling the 'concept drift' problem for non-stationary time series; and (ii) how to develop a deep learning model that can dynamically adapt its structure from shallow to deep with the inflow of data to overcome the under-fitting problem, namely tackling the 'model selection' problem. To tackle these challenges, we propose an Evolutive Convolutional Neural Network (ECNN) that dynamically re-weights the sub-structures of the model from data streams in a sequential or online learning fashion, by which capacity scalability and sustainability are introduced into the model. Experiments on real ocean observation data verify the effectiveness of our model. As far as we know, this is the first work that introduces online deep learning techniques into ocean data prediction research.
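A purely illustrative sketch of the dynamic re-weighting idea: every hidden block gets its own output head, and the heads' weights are updated multiplicatively from their losses, so the effective depth adapts as the stream evolves. The fully connected architecture and hedge-style update below are assumptions for illustration, not the paper's exact ECNN.

```python
import torch
import torch.nn as nn

blocks = nn.ModuleList([nn.Sequential(nn.Linear(8, 8), nn.ReLU()) for _ in range(4)])
heads = nn.ModuleList([nn.Linear(8, 1) for _ in range(4)])
alpha = torch.ones(4) / 4                     # per-depth output weights
beta, loss_fn = 0.99, nn.MSELoss()
opt = torch.optim.SGD(list(blocks.parameters()) + list(heads.parameters()), lr=0.01)

def step(x, y):
    global alpha
    h, preds = x, []
    for blk, head in zip(blocks, heads):      # one prediction per depth
        h = blk(h)
        preds.append(head(h))
    losses = torch.stack([loss_fn(p, y) for p in preds])
    (alpha * losses).sum().backward()         # train all depths jointly
    opt.step(); opt.zero_grad()
    with torch.no_grad():                     # hedge update: punish bad depths
        alpha = alpha * beta ** losses
        alpha = alpha / alpha.sum()
    return sum(a * p.detach() for a, p in zip(alpha, preds))

for t in range(100):                          # simulated non-stationary stream
    x = torch.randn(1, 8)
    y = x.sum(1, keepdim=True) if t < 50 else (x ** 2).sum(1, keepdim=True)
    step(x, y)
print("depth weights:", alpha)
```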
Metaverse development has often focused on improving virtual reality technologies due to their benefits in establishing immersion in virtual environments. Its dependency on VR technology brings limitations, including the lack of high-quality graphics and a lack of mobility. The mobile platform is the most prevalent interface for users to gain VR experience. There is a big challenge that mobile VR must face: it has limited computing power, but VR requires high rendering speed and image quality. The traditional solution is to complete lighting and shading computation offline and bake the result into a light map. Thus, only texture lookups are needed at runtime, which allows very high rendering speed. The problem with traditional baking is that it usually takes seconds to minutes to finish. In this paper, we introduce a novel edge computing framework to reduce the baking time to less than a second. Our method uses reservoir-based spatiotemporal importance resampling (ReSTIR) to bake low-resolution light maps at the edge server. On the mobile side, the light maps are enlarged using a super resolution technique. We also use a denoiser to make sure the baked image is noise-free. The experimental results show that our method achieves near real-time performance for scenes of average complexity. The bottleneck is the bandwidth of the mobile device when performing super resolution. Once it is improved, we can expect our entire edge computing framework to become real-time.
Biochemical oxygen demand (BOD) is one of the principal indicators for evaluating wastewater effluent quality. Establishing an effective prediction model is an important way to properly monitor sewage water quality. However, due to the nonlinearity, time-varying behavior, and large delay of the sewage treatment process, the traditional back-propagation neural network (BPNN) and Elman neural network (ElmanNN) models are prone to falling into local optima, resulting in poor prediction accuracy when dealing with high-dimensional and complex data structures. Therefore, this paper proposes a VIP-PSO-Elman model. In the proposed model, the partial least squares (PLS) method extracts hidden information through variable importance in projection (VIP), which is then used for variable selection. The PSO algorithm is then applied to optimize the Elman network's connection weights and thresholds to achieve the global optimal solution. The proposed model was validated on a data set from the University of California database (UCI). The results show that the model performs well in terms of root mean square error (RMSE) and correlation coefficient (R).
At present, ERα is considered an important target in the treatment of breast cancer. Compounds that antagonize ERα activity may be candidate drugs for breast cancer treatment. The bioactivity of a compound against ERα is measured by IC50 and pIC50.
In part 2, we conducted variable selection for the 729 molecular descriptors of 1974 compounds and selected the top 20 molecular descriptors with the most significant influence on biological activity through the Statistical Pre-processing Model Based on Mutual Information and the GRA Model. In part 3, we used the selected molecular descriptors as independent variables and pIC50 as the dependent variable, and then established the QSAR Model and predicted the pIC50 values of 50 compounds. In part 4, we established the Transformation Model of ERα Bioactivity Index and transformed the predicted pIC50 values into IC50 values.
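For reference, the transformation in part 4 follows the standard definition pIC50 = -log10(IC50 in mol/L); with IC50 in nanomoles per liter this becomes pIC50 = 9 - log10(IC50_nM). A minimal sketch of both directions (the paper's exact model is not reproduced, only the standard conversion):

```python
import math

def pic50_to_ic50_nM(pic50: float) -> float:
    """Invert pIC50 = 9 - log10(IC50_nM)."""
    return 10 ** (9 - pic50)

def ic50_nM_to_pic50(ic50_nM: float) -> float:
    return 9 - math.log10(ic50_nM)

print(pic50_to_ic50_nM(7.0))    # -> 100.0 (nM)
print(ic50_nM_to_pic50(100.0))  # -> 7.0
```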
The computational chemistry, medicinal chemistry, and computer science communities are increasingly interested in ADMET property prediction, which plays an important role in the drug development process. By means of machine learning prediction, the human and material resources required for drug screening can be greatly reduced. Machine learning methods are a new approach to the ADMET property prediction process. In this work, five ADMET properties of compounds were selected, namely Caco-2, CYP3A4, hERG, Human Oral Bioavailability, and Micronucleus. The five properties are predicted by SVM, Boosting, Bagging, Decision Tree, and other model algorithms. The results indicate that AdaBoost (a Boosting method) and cubic SVM have the highest accuracy on the dataset, 96.5% and 95.5% respectively, which saves a lot of work in the selection and optimization of ADMET property prediction models.
Software-defined Internet of Things (SD-IoT) is a hot topic in next-generation network technology. However, diverse online services have led to large-scale service connections, which poses a challenge to the carrying capacity of current access control mechanisms. Therefore, this paper proposes a lightweight access control mechanism located in the data plane, based on the flexibility of programmable switches. First, we analyze the massive connectivity and openness requirements of SD-IoT and determine the key indicators for evaluating access control performance. Second, we design and implement attribute-based access control on the data plane and optimize the authentication period on the user side based on the network status. Third, we evaluate the proposed mechanism through reasonable experiments, and the results show that our solution can achieve low-latency service authorization.
In this paper, we present a novel approach to identify Tor hidden service access activity with key sequences under the Obfs4 scenario. By analyzing the key cell signals that occur during the Tor hidden service access process, we obtain the start index and window size of the key TCP packet sequence of the traffic. In order to verify the effectiveness of this method, we perform comprehensive analysis under nine scenarios with different Obfs4 transmission modes. We find from the experimental results that there is a TCP packet sequence window that contributes greatly to identifying Tor hidden service access traffic. Using only the key TCP sequence as input, we achieve more than 90% accuracy as well as recall in all nine scenarios.
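A minimal sketch of the feature-extraction step implied above: given a capture reduced to signed packet sizes (positive = outbound, negative = inbound), slice out the window located by the start index and window size derived from the key cell signals. The index, size, and trace values are placeholders, not the paper's parameters.

```python
from typing import List

def key_window(packet_sizes: List[int], start: int, size: int) -> List[int]:
    """Return the key TCP packet subsequence used as classifier input."""
    return packet_sizes[start:start + size]

trace = [514, -514, 1460, -1460, 514, 1460, 1460, -514, 514, -1460]
features = key_window(trace, start=2, size=5)   # hypothetical window location
print(features)                                 # -> [1460, -1460, 514, 1460, 1460]
```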
In the e-commerce scenario, the goal of a session-based recommendation system (SBRS) is to predict users' next item clicks, while the session remains anonymous in the process. Graph Neural Networks (GNNs) have aroused widespread interest in SBRS research. However, there are two important drawbacks to applying GNNs to this task. First, classic GNN approaches rarely consider the time interval between a user clicking on two items, as occurs in real-world environments. This time interval is very important for accurately recommending the user's next interest. Second, for inter-session information, most GNN-based models consider the n-hop neighbor nodes of a single node, which introduces more noise in the global information learning phase. In this paper, we propose an Item Transition Attentive Graph Neural Network (ITGNN) model. This model learns how the time interval and the global frequency of item transitions affect a local session. Extensive experiments conducted on two e-commerce datasets demonstrate that the model is more accurate in the recommendation task.
With the development of the Internet, various new applications and scenarios have made cyberspace more complicated. Endless new applications and scenarios place deterministic Quality of Service (QoS) requirements on existing networks. To solve the problem of deterministic QoS in existing networks, the IETF established the Deterministic Networking (DetNet) working group. The IETF DetNet working group aims to answer the deterministic QoS requirement with support for deterministic worst-case delay and zero packet loss for DetNet flows. The DetNet working group proposed resource allocation, explicit paths, and service protection mechanisms to provide the QoS requirements mentioned above.
In this context, we propose a routing mechanism based on active network telemetry with Programmable Protocol-independent Packet Processors (P4). First, we design a more simplified probe format and improve the probe path decision algorithm over an end-to-end range. We then propose a service-adaptive multi-constraint heuristic routing strategy to guarantee the delay and bandwidth requirements of DetNet QoS with Segment Routing (SR). Our simulation results show that our routing mechanism can guarantee worst-case delay for DetNet QoS.
As research on blockchain deepens, blockchain privacy protection technologies continue to develop. However, the use of these privacy protection technologies may cause supervisory problems, since tracing and censoring malicious users and their misbehavior become difficult. To solve these supervisory problems, we propose a supervisory scheme based on the Linkable Spontaneous Anonymous Group Signature (LSAG) scheme. In this paper, we first propose the overall supervisory scheme based on LSAG to achieve effective tracing of blockchain users. To prevent abuses of tracing power, we then redesign the key generation algorithm and its corresponding negotiative decryption algorithm in the scheme to allow decentralized traceability: only when all supervisory nodes agree can a transaction be decrypted and linked to its sender. Experiments show that our scheme can effectively realize transaction accountability and also prevent the abuse of this supervisory power.
Address spoofing is a thorny problem encountered in the development of the Internet. The governance of address spoofing attacks includes tracking the location of the attacker in response to subsequent attacks. In this article, based on in-band telemetry, we propose a backtracking scheme for address spoofing flows, design a telemetry header for recording the forwarding path, and provide a fine-grained backtracking function embedded in the telemetry header. We analyze the throughput of each network from the telemetry data and initially predict the source of address spoofing. We conducted experiments to test the feasibility and accuracy of telemetry traceability and analyzed the overhead of the fine-grained traceability mechanism. Experiments show that our method incurs low overhead while ensuring traceability and accuracy.
In order to support the dynamic perception neural network of underground space and the intelligent brain of the city, and in view of the multi-source, multi-category, multi-dimensional, and high-volume characteristics of geological data, this paper studies a big data storage system that integrates multi-source acquisition, converged storage, and intelligent processing to solve the problems of the wide range, long time span, multi-dimensional sources, and diverse processing of underground space information. This system promotes the combination of sensor networks, big data, and other technologies with the urban underground space perception industry; realizes the digitalization and intellectualization of various kinds of underground space information; improves the planning, risk assessment, and disaster prediction of underground space; and provides support for the comprehensive development and utilization of underground space.