Latest Articles

Articles in press have been peer-reviewed and accepted. They have not yet been assigned to volumes/issues, but are citable by Digital Object Identifier (DOI).
Nuclear Science and Engineering
Simulation study of neutron source for bimodal imaging target system based on low energy high current cyclotron
Lu Lu, An Shizhong, Guan Fengping, Wei Sumin
Available online, doi: 10.11884/HPLPB202638.250168
Abstract:
Background
Gamma and thermal neutron imaging are important non-destructive testing methods that are complementary in many aspects. Thermal-neutron and gamma bimodal imaging combines the advantages of both: compared with single-beam imaging, it can distinguish different substances and is simultaneously sensitive to both nuclides and elements.
Purpose
Exploiting proton-target reactions that produce neutrons and gamma rays together, and based on the 18 MeV cyclotron being developed by the China Institute of Atomic Energy, this paper presents a simulation-based design of a bimodal imaging neutron source.
Methods
Beryllium, with its high (p, n) reaction cross-section, is selected as the target to generate neutrons. To obtain a higher thermal neutron flux, polyethylene is used as the neutron moderator and reflector. Exploiting the different spatial distributions of thermal neutrons and gamma rays, the two types of radiation are extracted separately in different directions. In addition, by designing neutron and gamma exits in the polyethylene, high-flux neutron and gamma beams are obtained simultaneously.
Results
After simulation optimization, the thermal neutron flux at the thermal neutron outlet can reach 1.78×10¹⁰ n/(cm²·s), and the gamma dose rate at the gamma outlet can reach 2.23×10⁴ rad/h.
Conclusions
This paper presents a design of a neutron source for thermal-neutron-gamma imaging based on the 18 MeV/1 mA cyclotron accelerator. The design efficiently extracts thermal neutron flux and gamma flux from a single target, implementing a single-target-dual-source configuration.
A rapid modeling method for Monte-Carlo particle transport simulation based on TIN under complex terrain
Wang Xuedong, Zhu Jinhui, Zuo Yinghong, Niu Shengli, Liu Li, Zhuo Jun
Available online, doi: 10.11884/HPLPB202638.250243
Abstract:
Background
The traditional Monte-Carlo (MC) method faces an inherent trade-off between geometric modeling accuracy and computational efficiency when addressing real-world irregular terrain modeling.
Purpose
This paper proposes a fast MC particle-transport modeling method for complex terrain based on triangulated irregular networks (TIN), addressing the technical challenge of achieving adaptive and efficient MC modeling in high-resolution complex terrain scenarios.
Methods
The methodology consists of three key phases: First, high-resolution raster-format terrain elevation data are processed through two-dimensional wavelet transformation to precisely identify abrupt terrain variations and extract significant elevation points. Subsequently, the Delaunay triangulation algorithm is employed to construct TIN-structured terrain models from discrete point sets. Finally, the MCNP code’s “arbitrary polyhedron” macrobody definition is leveraged to establish geometric planes, with Boolean operations applied to synthesize intricate geometric entities, thereby realizing rapid automated MC modeling for high-resolution complex terrains.
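The final step described above, converting each TIN facet into MCNP surface planes, can be sketched as follows. This is a minimal illustration, not the authors' code: each triangle is assumed to be given as three (x, y, z) vertices, and the card formatting is hypothetical.

```python
def plane_from_triangle(p0, p1, p2):
    """Plane ax + by + cz = d through three TIN vertices (cross-product normal)."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    a = u[1] * v[2] - u[2] * v[1]
    b = u[2] * v[0] - u[0] * v[2]
    c = u[0] * v[1] - u[1] * v[0]
    d = a * p0[0] + b * p0[1] + c * p0[2]
    return a, b, c, d

def mcnp_p_card(surf_id, tri):
    # An MCNP general plane ("P") card lists the coefficients a b c d.
    a, b, c, d = plane_from_triangle(*tri)
    return f"{surf_id} P {a:g} {b:g} {c:g} {d:g}"
```

For the macrobody route, MCNP's ARB macrobody accepts the polyhedron vertices directly; Boolean cell operations then combine the resulting half-spaces, as the abstract describes.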
Results
The results demonstrate that the proposed method accurately reproduces terrain-induced effects on radiation transport, achieving high-fidelity simulations while significantly compressing the number of cells and enhancing computational efficiency.
Conclusions
This methodology represents a novel approach for large-scale radiation field modeling under complex terrain constraints, demonstrating broad applicability to MC particle transport simulations in arbitrary large-scale complex terrain scenarios.
Numerical simulation of LARCH software based on unified energy grid method
Luo Shijie, Cai Li, Yang Junwu, Lu Haoliang, Chen Jun, Li Jinggang, Yu Chao, Wang Ting
Available online, doi: 10.11884/HPLPB202638.250219
Abstract:
Background
With the continuous development of nuclear power technology, reactor design places ever higher requirements on the accuracy, efficiency, and multi-functionality of nuclear computing software. Current mainstream Monte Carlo software struggles to balance reactor radiation shielding design against nuclear design calibration, which restricts the efficiency of reactor-core criticality simulation. Therefore, CNPRI has developed the 3D Monte Carlo software LARCH 1.0 to meet the actual needs of nuclear power engineering design.
Purpose
This study aims to optimize the particle-energy search mechanism in Monte Carlo simulation, addressing the low efficiency of traditional search methods; on this basis, the delta-tracking algorithm is further improved to enhance the efficiency of core criticality calculation and provide efficient, accurate computational support for reactor design.
Method
During the development of the LARCH software, the core technological innovation lies in the adoption of a unified energy grid to replace the traditional binary and logarithmic search methods. Through standardization and unification of the energy grid, the number of searches in the particle-energy matching process is reduced, and the time consumed by a single search is shortened. Building on the unified energy grid, the delta-tracking algorithm was further developed and optimized to improve computing efficiency. A targeted numerical verification scheme was designed to compare LARCH 1.0 against traditional Monte Carlo software in reactor problem simulations.
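The unified-energy-grid idea can be illustrated with a minimal sketch (not the LARCH implementation): all nuclide grids are merged into one union grid, and the interval index into every nuclide grid is precomputed for each union interval, so a particle's energy lookup needs a single search instead of one search per nuclide.

```python
import bisect

def build_unified_grid(nuclide_grids):
    """Merge per-nuclide energy grids and precompute per-nuclide interval indices."""
    union = sorted({e for g in nuclide_grids for e in g})
    # index[k][j]: interval index into nuclide k's own grid for union interval j
    index = [[bisect.bisect_right(g, e) - 1 for e in union] for g in nuclide_grids]
    return union, index

def lookup(union, index, energy):
    """One binary search on the union grid serves every nuclide."""
    j = bisect.bisect_right(union, energy) - 1
    return [idx[j] for idx in index]
```

The precomputation trades memory for per-history speed, which is the balance the abstract describes.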
Results
The optimized technical solution achieved remarkable results. The unified-energy-grid search significantly reduced the time cost of particle-energy search compared with the traditional method. On this basis, the optimized delta-tracking algorithm increased the core criticality calculation efficiency of the Monte Carlo software by approximately 25%.
Conclusions
The unified energy grid method and the optimized delta-tracking algorithm adopted by the LARCH 1.0 3D Monte Carlo software provide an effective technical path for improving Monte Carlo efficiency and significantly enhance the criticality calculation efficiency of the reactor core. The software can thus provide a more efficient and reliable numerical simulation tool for reactor design. More extensive engineering verification and functional iteration will follow.
BNCT dosimetric study of head tumor cases based on Monte-Carlo methods
Peng Heyu, Zheng Qi, Wang Wei, He Qingming, Cao Liangzhi, Zu Tiejun, Wang Yongping
Available online, doi: 10.11884/HPLPB202638.250291
Abstract:
Background
Boron neutron capture therapy (BNCT) is an innovative binary targeted cancer treatment technology with high relative biological effect and cell-scale precision; however, its clinical application is limited by the long computation time of traditional Monte-Carlo methods for dose calculation and the lack of sufficient dosimetric research on head tumors.
Purpose
This study aims to address these challenges by optimizing the Monte-Carlo algorithm and developing pre-processing/post-processing modules, verifying the accuracy of the computational system, and analyzing the dosimetric characteristics of BNCT for head tumors.
Methods
Based on NECP-MCX, three acceleration strategies, voxel geometry fast tracking, transport-counting integration, and MPI parallel optimization, were adopted to improve computational efficiency. Pre-processing (DICOM image parsing, material-boron concentration mapping, 3D voxel modeling) and post-processing (dose-depth curve, Dose-Volume Histogram (DVH), dose distribution map) modules were developed. Both NECP-MCX and MCNP were used to calculate the dose distribution of a head tumor case (RADCURE-700) for comparison.
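As a minimal illustration of the post-processing metrics mentioned above (not the NECP-MCX code): D90, the minimum dose received by 90% of the target volume, and the cumulative DVH can both be read off a list of voxel doses.

```python
def d90(voxel_doses):
    """Minimum dose received by 90% of voxels (sort descending, index at ceil(0.9 n))."""
    s = sorted(voxel_doses, reverse=True)
    k = (9 * len(s) + 9) // 10 - 1   # ceil(0.9 * n) - 1, exact integer arithmetic
    return s[k]

def dvh(voxel_doses, dose_levels):
    """Cumulative dose-volume histogram: volume fraction receiving at least each level."""
    n = len(voxel_doses)
    return [sum(1 for d in voxel_doses if d >= lvl) / n for lvl in dose_levels]
```

In practice the voxel doses come from the transport calculation and the DVH is plotted over a sweep of dose levels; the sketch only shows the bookkeeping.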
Results
The single-dose calculation time was reduced from 2 h to 9.4 min. The dose curves, DVHs, and dose distribution maps from the two programs showed good consistency, with relative deviations below 5% within a depth of 10 cm. The resulting BNCT treatment plan achieved a tumor target-volume D90 of 60 Gy in 63 min, with healthy-tissue dose below 12.5 Gy.
Conclusions
The optimized NECP-MCX system realizes efficient and accurate dose calculation for BNCT. The consistent results validate its reliability, and the dosimetric analysis demonstrates the potential of BNCT for head tumor treatment, providing methodological support for clinical treatment planning.
Development and validation of a nuclear data adjustment module based on sensitivity analysis
Zou Xiaoyang, Liang Liang, Xu Jialong
Available online, doi: 10.11884/HPLPB202638.250234
Abstract:
Background
With the development of neutron calculation methods and improved modeling capabilities, the errors introduced by model approximations and discretization methods in nuclear reactor physics calculations have gradually decreased. However, nuclear data, due to the challenges in measurement, have become the key input parameter affecting computational accuracy.
Purpose
In this study, a nuclear data adjustment module based on sensitivity analysis and the generalized linear least squares algorithm was developed within the self-developed sensitivity and uncertainty analysis platform, SUPES.
Methods
First, sensitivity analysis was used to determine the relationship between system responses and input parameter variations. Next, similarity analysis was applied to select experimental setups with high similarity at the neutron physics level. Finally, the generalized linear least squares algorithm was employed to minimize the error between computed and measured values, resulting in nuclear data adjustments.
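The final adjustment step can be sketched for the simplest case of one measured response. The scalar-measurement (Kalman-style) form below, with illustrative variable names, is an assumption for illustration only, not the SUPES implementation:

```python
def glls_adjust(sigma, cov, S, C, E, v_exp):
    """Adjust prior data sigma given one measured response E and computed value C.

    cov: prior covariance of sigma; S: sensitivity vector dC/dsigma;
    v_exp: variance of the measurement. Returns the adjusted data vector.
    """
    n = len(sigma)
    covS = [sum(cov[i][k] * S[k] for k in range(n)) for i in range(n)]
    denom = sum(S[i] * covS[i] for i in range(n)) + v_exp
    gain = [cs / denom for cs in covS]               # cov * S^T / (S cov S^T + v)
    return [sigma[i] + gain[i] * (E - C) for i in range(n)]
```

The full generalized linear least squares algorithm handles many responses at once and also updates the covariance; the sketch shows only the data-update direction, which pulls the computed value toward the measurement in proportion to sensitivity and prior uncertainty.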
Results
The adjustment of the ACE-format continuous-energy database was performed on 22 cases from the critical benchmark HEU-MET-FAST-078. The numerical results show that the root mean square error of the effective multiplication factor (k_eff) was reduced from 3.10×10⁻³ to 1.53×10⁻³.
Conclusions
The comparison and analysis verified the correctness of the developed nuclear data adjustment module.
Development and performance test of a high resolution extreme ultraviolet spectroscopy system
Chen Yong, Yang Lei, Lu Feng, Wang Shaoyi, Yang Zuhua, Fan Quanping, Wei Lai
Available online, doi: 10.11884/HPLPB202638.250393
Abstract:
Background
The retention and diffusion of helium on the surface of the first wall is one of the key problems in the study of magnetic confinement fusion. Laser-induced breakdown spectroscopy is the most promising technique for in-situ diagnosis of the first wall. Compared with the optical spectral range, laser-induced extreme ultraviolet spectroscopy has more advantages in sensitivity, noise suppression and accuracy.
Purpose
To meet the requirement for high-precision on-site measurement of helium impurity lines in magnetic confinement fusion, an ultra-high-resolution EUV spectroscopy system was developed.
Methods
The spectrometer adopts a grazing-incidence Czerny-Turner configuration, with the luminous flux and spectral resolution adjusted through an adjustable entrance slit. Ray-tracing simulation was carried out using self-developed optical design software, and wavelength calibration and performance testing were performed with a microwave plasma light source.
Results
The simulation results show that the resolving power is better than 20 000, and the experimental results indicate that the spectrometer achieves a spectral resolution of 0.0014 nm at the He II line (30.3786 nm).
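The quoted figures are mutually consistent: a resolution of 0.0014 nm at 30.3786 nm corresponds to a resolving power λ/Δλ of about 21 700, above the simulated 20 000. A one-line check:

```python
def resolving_power(wavelength_nm, delta_lambda_nm):
    """Resolving power R = lambda / delta-lambda (dimensionless)."""
    return wavelength_nm / delta_lambda_nm
```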
Conclusions
The spectrometer meets the requirement for high-precision measurement of helium extreme ultraviolet spectral lines and is expected to provide important support for research on helium retention and diffusion in the first wall.
Advanced Interdisciplinary Science
A nanosecond large-spot laser measurement system based on multi-channel peak-hold circuit
Li Guochao, Shu Jun, Liu Kefu, Zhao Hui, Qiu Jian
Available online, doi: 10.11884/HPLPB202638.250330
Abstract:
Background
With the continuous advancement of photoelectric applications such as LiDAR, three-dimensional sensing, and free-space communication towards longer distances, larger fields of view, and higher precision, large-spot, nanosecond-pulse lasers are progressively emerging as a critical type of light source, owing to their advantages in far-field uniform illumination and weak signal detection.
Purpose
To address the challenges of amplitude distortion and sampling difficulties in beam quality measurements of large-spot, nanosecond-pulse lasers caused by optical path shaping distortions, transient capture limitations, and coherence requirements, this paper proposes a beam quality measurement system tailored for nanosecond pulsed large-aperture lasers.
Methods
The system employs a three-dimensional stepping platform combined with a photodetector to reconstruct the spatial intensity distribution of the beam, and incorporates a multi-channel peak-hold circuit to accurately latch pulse peaks, thereby ensuring transient fidelity in amplitude acquisition. To mitigate non-ideal conditions such as partial beam truncation and incomplete boundaries, a circle-fitting method is introduced as a complement to the second-moment calculation of energy, enhancing the robustness of beam size evaluation.
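The second-moment width computation referred to above can be sketched in one dimension. This is a minimal illustration assuming sampled positions and intensities; the circle-fitting complement for truncated beams is omitted.

```python
import math

def d4sigma(xs, intensities):
    """ISO 11146-style second-moment beam width: 4x the intensity-weighted std dev."""
    total = sum(intensities)
    mean = sum(x * w for x, w in zip(xs, intensities)) / total
    var = sum((x - mean) ** 2 * w for x, w in zip(xs, intensities)) / total
    return 4.0 * math.sqrt(var)
```

For a real 2D spot the same moments are taken over both axes of the reconstructed intensity map; the peak-hold circuit's role is simply to make each sampled intensity faithful to the nanosecond pulse peak.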
Results
Experiments employing a typical vertical-cavity surface-emitting laser (VCSEL) were conducted through multi-position 3D axial scanning, comparing the consistency of beam size and energy distribution measured by different methods.
Conclusions
The results verify the measurement reliability and applicability of the proposed system under large-spot, nanosecond-pulse conditions, offering an effective means for laser beam quality assessment in related applications.
Feasibility study on neutron multiplicity counting method based on neural network
Feng Yuanwei, Zheng Yulai, Li Yong, Liu Chao, Zhang Lianjun, Huang Zhe, Guo Wenhui
Available online, doi: 10.11884/HPLPB202638.250245
Abstract:
Background
Neutron multiplicity measurement technology, a core method in the field of non-destructive testing, plays a critical role in determining the mass of fissionable material (e.g., ²³⁵U). However, it faces technical bottlenecks such as prolonged measurement cycles and measurement deviations under non-ideal conditions.
Purpose
This paper aims to explore feasible pathways for integrating neutron multiplicity measurement methods with neural network technology. The goal is to provide new research perspectives for advancing neutron multiplicity measurement technology toward greater efficiency and intelligence.
Methods
Leveraging Geant4 and MATLAB software, an Active Well Coincidence Counter (AWCC) simulation model was constructed to achieve high-precision simulation of the entire active neutron multiplicity measurement process. Building upon this, three neural networks—Backpropagation Neural Network (BPNN), Convolutional Neural Network (CNN), and Long Short-Term Memory network (LSTM)—were developed using the PyTorch framework to analyze and investigate neutron multiplicity distribution data.
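The multiplicity-distribution data fed to such networks are conventionally summarized by reduced factorial moments (the singles/doubles quantities of standard point-model multiplicity analysis). A minimal sketch of that reduction, independent of the neural-network part and of the paper's actual feature choice:

```python
def factorial_moments(p):
    """Reduced factorial moments of a multiplicity distribution p[n].

    m1 corresponds to singles and m2 to doubles in conventional
    multiplicity counting analysis.
    """
    m1 = sum(n * pn for n, pn in p.items())
    m2 = sum(n * (n - 1) / 2 * pn for n, pn in p.items())
    return m1, m2
```

A neural-network approach instead learns the mapping from the (possibly non-ideal) distribution to the mass directly, which is the substitution the abstract investigates.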
Results
Compared with traditional calculation methods based on the active neutron multiplicity equation, neural network models represented by CNN and LSTM demonstrate significant advantages in measurement accuracy and efficiency. Specifically, in terms of relative error metrics, neural network models can reduce errors to lower levels; in the time dimension of measurement, they substantially shorten data processing cycles, effectively overcoming the timeliness constraints inherent to traditional approaches.
Conclusions
This achievement fully validates the theoretical feasibility and technical superiority of the neural network-based neutron multiplicity measurement approach, providing a novel solution for advancing neutron multiplicity detection toward greater efficiency and intelligence. Subsequent work will enhance the adaptability and noise resistance of neural network models for complex data by increasing simulation scenario complexity and introducing diversified factors such as noise interference and geometric variations. Meanwhile, building upon simulation studies, physical experimental validation will be conducted using AWCC instrumentation to drive the transition of neural network-based neutron multiplicity measurement technology from simulation to engineering application.
Particle Beams and Accelerator Technology
Research progress in the generation and applications of high-flux neutron sources driven by high-power laser facilities
He Shukai, Cui Bo, Qi Wei, Hong Wei, Deng Zhigang, Yan Yonghong, Zhang Bo, Li Jingjing, Zhou Kainan, Chen Zhongjing, Zhou Weimin, Zhao Zongqing, Gu Yuqiu
Available online, doi: 10.11884/HPLPB202638.250386
Abstract:
This paper briefly reviews the series of works carried out by the research team from the Laser Fusion Research Center, China Academy of Engineering Physics, based on the Xingguang-III and Shenguang-II Upgrade laser facilities, in the field of laser-driven neutron source generation and applications. In terms of generation mechanisms, it highlights explorations of several technical approaches, including enhancing photo-nuclear neutron production efficiency through novel target design, increasing neutron yield based on the target normal sheath acceleration mechanism, and obtaining high-quality neutron sources via collisionless electrostatic shock acceleration. On the application front, preliminary experimental studies have been conducted in areas such as fast neutron radiography, material radiation effects, and nuclear material detection, demonstrating the potential application value of such neutron sources as short-pulse, high-flux sources. With continuous advancements in laser technology and ongoing optimization of generation mechanisms, this new type of neutron source is expected to play an increasingly important role in basic scientific research, nuclear energy technology development, and industrial applications, providing new research tools and technical support for the development of related disciplines.
Pulsed Power Technology
Experimental investigation on multi-channel discharge formation in self-breakdown switch for 10 MA pulsed power device
Ji Ce, Li Feng, Ren Ji, Jiang Jihao, Li Yong, Cai Potao, Zhang Haoyu, Xu Zixing
Available online, doi: 10.11884/HPLPB202638.250351
Abstract:
Background
Water-dielectric self-breakdown switches are critical components in pulsed power devices such as the 10 MA facility. The plate-sphere electrode structure is specifically designed to achieve simultaneous multi-channel discharge, which is essential for minimizing switch inductance and reducing timing jitter.
Purpose
This study investigates the factors affecting multi-channel formation in a water-dielectric, three-electrode plate-sphere self-breakdown switch operating at 3 MV, with the aim of validating the theoretical formation criterion.
Methods
Theoretical analysis was conducted based on the specific parameters of the switch structure, focusing on key temporal characteristics influencing discharge behavior. Experimental validation was performed at the nominal breakdown voltage of 3 MV, utilizing diagnostic techniques to observe the development of discharge arcs across all electrode pairs.
Results
The calculated characteristic value for multi-channel formation was determined to be 8.6 ns, exceeding twice the measured switch jitter time of 3 ns, thereby satisfying the theoretical criterion. Observations confirmed that discharge arcs initiated nearly synchronously at the three sphere electrodes and propagated toward the plate electrodes, with complete multi-channel formation achieved within approximately 30 ns.
Conclusions
The study validates the criterion for multi-channel discharge in the plate-sphere switch structure. The design effectively enables simultaneous formation of multiple discharge channels within tens of nanoseconds, meeting essential requirements for high-performance pulsed power devices and contributing to improved operational stability.
Numerical simulation on the voltage efficiency factors of the spiral generator
Gao Mingzhu, Su Jiancang, Shang Wei, Qiu Xudong, Li Rui, Liu Shifei, Yan Wenlong, Zhang Haoran, Liu Zhi
Available online, doi: 10.11884/HPLPB202638.250327
Abstract:
Background
In the voltage multiplication process of a spiral generator based on the principle of vector inversion, its voltage efficiency is constrained by losses such as switching loss, transmission line loss and leakage inductance loss.
Purpose
This study aims to quantitatively investigate the impact of key design parameters (coil turn number n, dielectric/electrode thickness, average dielectric diameter D, magnetic core permeability, and switch position) on leakage loss and overall voltage efficiency.
Methods
This study employs a field-circuit collaborative simulation method for modeling and analysis.
Results
The simulation results demonstrate that utilizing a high-permeability magnetic core can significantly enhance voltage efficiency; increasing D/n ratio improves output efficiency; while a higher turn number n boosts output voltage amplitude, it concurrently reduces voltage efficiency; enlarging the average diameter D enhances voltage efficiency but at the cost of increased device volume; reducing dielectric thickness benefits efficiency, though excessively thin layers risk insulation breakdown; and positioning the switch at the middle of the coil, rather than at the end, substantially increases voltage efficiency.
Conclusions
Furthermore, an in-depth analysis of the electromagnetic energy conversion process after switch closure reveals that a high-efficiency spiral generator must achieve complete conversion of magnetic energy into electric field energy while ensuring the electric fields in the active and passive layers are oriented in the same direction, which is essential for optimal performance.
Magnetic core reset method of high repetition high voltage pulse induction acceleration cavity
Huang Ziping, Chen Yi, Lü Lu
Available online, doi: 10.11884/HPLPB202638.250363
Abstract:
Background
In recent years, emerging application fields such as FLASH radiotherapy and flash radiography have created an urgent demand for high-repetition-rate linear induction accelerators (LIA) capable of operating at kHz-level frequencies. Whether the magnetic cores of induction accelerator cavities can effectively reset between repetitive pulses has become one of the critical factors determining the feasibility of high-repetition-rate LIA.
Purpose
This paper focuses on the reset methods for magnetic cores in high-repetition-rate pulsed induction accelerator cavities.
Methods
Through high-voltage experiments and circuit simulations, various rapid reset methods for both amorphous and nanocrystalline magnetic cores were investigated and comparatively analyzed. Based on this work, experimental tests were conducted on the interpulse reset effectiveness of accelerator cavity cores using self-developed high-repetition-rate pulsed induction accelerator modules.
Results
Research results indicate that nanocrystalline magnetic cores are more suitable for high-repetition-rate induction accelerator cavities. Different reset methods can achieve magnetic core reset at varying repetition frequencies.
Conclusions
Utilizing the inductor-isolated DC reset method, the existing device configuration can meet the reset requirements for nanocrystalline magnetic cores at a 10 kHz repetition rate. By leveraging the self-recovery capability of low-remanence nanocrystalline magnetic cores, automatic reset of accelerator cavity cores can be achieved at 100 kHz repetition rates.
Analysis of influencing factors on outlet velocity of multi-stage synchronous induction coil gun
Tang Jing, Ding Chenghan, Hao Guanyu, Lin Fuchang, Zhang Qin
Available online, doi: 10.11884/HPLPB202638.250337
Abstract:
Background
As an important branch of electromagnetic launch, the multi-stage synchronous induction coil gun has become a research hotspot because of its non-contact operation, linear propulsion, and high efficiency. The armature outlet velocity is a key performance index, affected by factors such as structural parameters, material properties, and coil circuit parameters. However, existing research lacks theoretical analysis of these factors.
Purpose
The purpose of this paper is to analyze theoretical approaches for improving the armature outlet velocity, and to explore the factors affecting it.
Methods
Based on an equivalent circuit model, this paper derives an analytical formula for the armature induced eddy current and investigates the factors affecting the outlet velocity via finite element simulation.
Results
Theoretical analysis shows that reducing the total inductance of the coil-armature equivalent circuit can increase the armature outlet velocity. Simulation results show that under the same initial electrical energy, reducing the number of turns of coils, reducing the cross-sectional shape factor of the rectangular wire, increasing the thickness and length of armature, and reducing the line inductance can improve the armature outlet velocity. Considering various factors, the simulated outlet velocity of 32 kg armature driven by 5-stage coil can reach 202.1 m/s, and the launch efficiency is 33.3%. The influence of various factors on the armature is in line with the theoretical analysis results.
Conclusions
This paper provides theoretical support for the design of multi-stage synchronous induction coil gun schemes.
Effect of glass phase in coatings on the vacuum insulation performance of alumina ceramics
Yang Jie, He Jialong, Chen Xin, Liu Ping, Zhao Wei, Li Chen, Qin Zhen, Huang Gang, Xiang Jun, Li Tiantao, Li Jie, Dong Pan, Wang Tao
Available online, doi: 10.11884/HPLPB202638.250395
Abstract:
Background
Alumina (Al2O3) ceramics are extensively employed as insulating components in vacuum electronic devices. However, under high voltage, charge accumulation on their surface can easily lead to surface flashover, which severely degrades the insulation performance of the device and affects its operation. Therefore, enhancing the vacuum surface insulation performance of Al2O3 ceramics holds significant academic value and practical implications. Surface coating represents a widely adopted strategy for enhancing the insulation performance of Al2O3 ceramics. Nevertheless, the specific influence of the glass phase within the coating on the insulating properties remains largely unexplored.
Purpose
The present work is dedicated to exploring how the glass phase in coatings affects the vacuum insulation performance of Al2O3 ceramics.
Methods
A Cr2O3-based coating was fabricated on the surface of Al2O3 ceramics, and the effects of the glass phase within the coating on phase structure, surface morphology, secondary electron emission coefficient (SEE), surface resistivity, and vacuum insulation performance of the coated ceramics were systematically investigated.
Results
The results indicate that Al from the substrate diffuses into the coating during high-temperature firing. As the glass phase content rises, the Cr2O3 phase in the coating gradually decreases and eventually disappears, fully reacting with the ceramic substrate to form Al2-xCrxO3 (0<x<2) and Mg(Al2-yCry)O4 (0<y<2), along with small amounts of ZnAl2O4 and (Na,Ca)Al(Si,Al)3O8. The coating improves the grain homogeneity and density of the ceramic surface, although variations in the glass phase content have a negligible effect on the microstructure. In addition, the Cr2O3 coating reduces both the SEE coefficient and the surface resistivity of the Al2O3 ceramic; however, as the glass phase content in the coating increases, both exhibit a gradual upward trend. The optimal insulation performance is achieved at a glass phase content of 20%, at which the vacuum surface hold-off strength reaches 119.63 kV/cm.
Conclusions
Modulation of the glass phase content in the surface coating enables the tunability of the vacuum surface insulation performance of the Al2O3 ceramics, with the performance improvement stemming from the decreased SEE coefficient and the appropriate surface resistivity.
Development of rep-rate PFN-Marx generator with nanosecond output jitter
Li Fei, Gan Yanqing, Zhang Beizhen, Gong Haitao, Song Falun, Jin Xiao
Available online, doi: 10.11884/HPLPB202638.250328
Abstract:
Background
The PFN (pulse-forming network)-Marx generator shows robust capability for enhancing the output efficiency and miniaturization of pulsed power systems, and offers the greatest potential for compact, lightweight design.
Purpose
This study aims to develop a compact PFN-Marx generator that is capable of generating high-power pulses with flat-top duration, while maintaining low output jitter.
Methods
A tailored pulse-forming module (PFM) was developed by employing a non-uniform 2-section PFN for compactness, and the influence of key circuit parameters on its output waveform was investigated. A PFN-Marx generator was then designed and assembled from the PFMs and low-jitter gas switches with planar trigger electrodes.
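For orientation, the textbook relations for a uniform PFN give its characteristic impedance and output pulse width. The sketch below uses illustrative per-section values and is only indicative, since the module described above uses a non-uniform 2-section network precisely to tailor the waveform beyond these relations.

```python
import math

def pfn_impedance(L, C):
    """Characteristic impedance of a PFN section, Z = sqrt(L/C)."""
    return math.sqrt(L / C)

def pfn_pulse_width(n, L, C):
    """Output pulse width of an n-section uniform PFN, tau ~ 2 * n * sqrt(L*C)."""
    return 2 * n * math.sqrt(L * C)
```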
Results
The effects of key circuit parameters on pulse shaping were quantitatively investigated, and waveform tailoring of the PFM was achieved. The PFM outputs a high-voltage pulse with a pulse width of about 150 ns and a flat-top duration (90%−90%) of about 80 ns. Assembled into the Marx generator, it delivers a 190 kV, 3.4 GW pulse to a 10.6 Ω resistive load while maintaining a flat-top duration of about 80 ns. Operating at a repetition rate of 50 Hz, it exhibits highly consistent output waveforms, with an output jitter as low as 2.4 ns.
Conclusions
A compact PFN-Marx generator was developed from compact high-voltage PFMs and low-jitter gas switches. This work supports the development of compact Marx generators with the required waveform and low output jitter.
Development of a 20 GW compact lightweight Tesla-transformer pulsed power driver
Wang Gang, Zeng Bo, Liu Sheng, Zheng Lei, Guo Zhiqiang, Jia Biao, Liu Yao, Liu Shifei, Shi Dingyuan, Huang Hongyang, Li Jie
Available online, doi: 10.11884/HPLPB202638.250362
Abstract:
Background
The rapid development of high-power microwave application technology presents significant challenges for the reliability and installability of pulsed power drivers.
Purpose
The design methodology of a compact, lightweight Tesla-type pulsed power driver based on high-energy-density liquid dielectric Midel 7131 and a dual-width pulse-forming line (PFL) is introduced.
Methods
A key breakthrough was achieved in the miniaturization of the integrated Tesla transformer and PFL assembly. Through optimization of the electrical length of the short-pulse transmission line and its impedance-matching characteristics, longstanding challenges associated with conventional single-cylinder PFLs and extended transmission lines using transformer-oil dielectrics were effectively resolved. A high-elevation, high-vacuum oil impregnation technique was developed for the Tesla transformer, successfully mitigating partial discharge in the oil-paper insulation system and thereby enhancing the power rating and operational reliability of the PFL.
Results
The developed pulsed power driver delivers a peak output power of 20 GW, a pulse duration of 50 ns, a pulse flat-top fluctuation of less than 2%, and a maximum repetition rate of 50 Hz. The system has demonstrated stable operation over continuous one-minute durations, accumulating approximately 200 000 pulses with consistent performance. The driver’s overall dimensions are 4.0 m (L)×1.5 m (W)×1.5 m (H), with a total mass of approximately 5 metric tons.
Conclusions
Compared to the conventional 20 GW Tesla-type pulsed power generator, this driver has achieved significant improvements in power density and miniaturization.
300 kV pre-ionization annular-cathode gas switch
Wang Gang, Jia Biao, Liu Shifei
, Available online  , doi: 10.11884/HPLPB202638.250444
Abstract:
Background
The rapid advancement of high-power pulse technology towards practical application imposes higher demands on the self-breakdown stability of high-voltage gas switches.
Purpose
This paper proposes a pre-ionization cathode switch concept that uses an auxiliary annular blade edge to regulate the initial electrons and an annular hemisphere to conduct the main current. A 300 kV-class pre-ionization annular-cathode gas switch was designed.
Methods
With a switch gap of 35 mm, the field enhancement factor at the blade edge of the pre-ionization switch was designed to be 6.2, a ratio of 3.2 relative to the field enhancement factor at the hemisphere. The breakdown characteristics under microsecond-level pulses were investigated experimentally.
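As a quick illustration of the quoted geometry figures (the factor 6.2 and the ratio 3.2 come from the abstract; dividing them to infer the hemisphere's enhancement factor is our own arithmetic, not a value stated by the authors):

```python
# Values quoted in the abstract for the pre-ionization switch geometry.
beta_edge = 6.2   # field enhancement factor designed at the annular blade edge
ratio = 3.2       # edge-to-hemisphere enhancement-factor ratio

# Implied field enhancement factor at the annular hemisphere (our inference).
beta_hemisphere = beta_edge / ratio

print(f"implied hemisphere enhancement factor ≈ {beta_hemisphere:.2f}")  # ≈ 1.94
```

The larger enhancement at the blade edge is what lets it seed initial electrons while the smoother hemisphere carries the main current at a lower local field.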
Results
The results indicated that in nitrogen at 0.5 MPa and a repetition rate of 1 Hz, the pre-ionization gas switch achieved an average breakdown voltage of 322.5 kV with an amplitude jitter of 0.44%. Compared to a pure annular hemispherical switch, the pre-ionization switch exhibited a 17.6% reduction in breakdown voltage and an 82% decrease in amplitude jitter.
Conclusions
The experimental study demonstrates that this pre-ionization gas switch offers significant advantages in achieving high voltage and low jitter.
High Power Microwave Technology
Investigation of the performance of vertical extrinsic photoconductive switches based on nitrogen-doped diamond
Li Pengyu, Yu Cui, He Zezhao, Liu Jingliang, Chen Xiangjin, Ma Mengyu, Zhou Chuangjie, Liu Qingbin, Yu Hao, Feng Zhihong, Zhou Biao, Zhao Huifeng, Xu Chunliang, You Hengguo, Wang Yi, Zhou Guo, Wang Yinglin, Guo Jianchao, Han Jingwen, Qi Zhihua
, Available online  , doi: 10.11884/HPLPB202638.250424
Abstract:
Background
Diamond is considered a promising candidate for photoconductive semiconductor switches (PCSSs) due to its exceptional material properties.
Purpose
The development of high-performance diamond PCSSs is primarily impeded, however, by high on-state resistance and relatively low breakdown voltage. This study aims to improve the performance of diamond PCSSs.
Methods
Vertical PCSSs passivated with Si3N4 were fabricated from nitrogen-doped single-crystal diamonds with different doping concentrations and thicknesses. The doping concentrations of the diamond samples were analyzed, and the photoresponse of the PCSSs was characterized under 532 nm laser excitation over a range of DC bias voltages.
Results
The experimental results showed that the nitrogen-doped diamond PCSSs exhibited a large on/off ratio (about 10¹¹) along with sub-nanosecond rise and fall times. Among them, the device with the highest nitrogen doping concentration exhibited the lowest on-state resistance. By reducing the material thickness, a peak output power of 128 kW was achieved at a bias voltage of 4 kV (corresponding to an electric field strength of 110 kV/cm), with the PCSS exhibiting an on-state resistance of 28.9 Ω, further improving device performance.
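As a back-of-envelope check on the quoted operating point (the bias and field values are from the abstract; the thickness is our own inference from E = V/d and is not stated by the authors):

```python
# Operating point quoted in the abstract for the thinned diamond PCSS.
V_bias_kV = 4.0            # applied DC bias voltage, kV
E_field_kV_per_cm = 110.0  # quoted electric field strength, kV/cm

# For a uniform field across a vertical device, E = V / d, so d = V / E.
d_cm = V_bias_kV / E_field_kV_per_cm
d_um = d_cm * 1e4          # convert cm to micrometres

print(f"implied diamond thickness ≈ {d_um:.0f} µm")  # ≈ 364 µm
```

This is consistent with a several-hundred-micrometre substrate thinned to raise the field at a given bias, which matches the abstract's point that reducing material thickness improved performance.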
Conclusions
Through the design of nitrogen doping concentration, reduction of substrate thickness, and application of Si3N4 passivation, this work successfully developed diamond PCSSs with good performance, paving the way for the development of high-performance diamond PCSSs.
High Power Laser Physics and Technology
Influence of target self-absorption on the energy spectrum and angular distribution of X-ray source
Ni Hui, Wu Sixin, Fan Sijie, Peng Mao, Wen Jiaxing, Zhao Zongqing
, Available online  , doi: 10.11884/HPLPB202638.250369
Abstract:
Background
The self-absorption effect of target materials plays a crucial role in shaping the performance of laser-driven X-ray sources, directly impacting their energy spectrum and angular distribution, which are critical parameters for applications such as high-resolution backlighting and radiographic diagnostics.
Purpose
This study aims to systematically investigate how key parameters, including the electron source position relative to the wire-target end-face, the diameter of the wire target, and the atomic number of the target material, affect the energy spectrum and angular distribution of the emitted X-rays.
Methods
A series of Geant4-based Monte Carlo simulations were performed using a validated wire target model. Key parameters were varied: electron source offset (50–150 μm), wire diameter, and target material (Cu, Mo, W, Au). The simulation model was benchmarked against experimental data obtained from the Xingguang-III laser facility.
Results
The results indicate that varying the electron source position within the studied range has a negligible influence on both the photon energy spectrum and angular distribution. In contrast, increasing the wire diameter leads to enhanced absorption of low-energy photons, resulting in noticeable spectral hardening and a broadening of the angular distribution due to increased multiple scattering. Furthermore, higher-Z target materials (W, Au) significantly enhance the high-energy photon yield but concurrently induce greater angular divergence.
Conclusions
The findings provide quantitative insight into the self-absorption mechanism and the distinct impact of each parameter. This study offers concrete guidance for target design: selecting an appropriate wire diameter and high-Z material can tailor the spectral hardness and brightness, while careful management of angular broadening is necessary for applications requiring high directivity.