Just Accepted
Available online, doi: 10.11884/HPLPB202638.250181
Abstract:
Background Purpose Methods Results Conclusions
The surface flashover of insulating media in SF6 under nanosecond pulses involves complex physical processes, and accurately predicting the surface flashover voltage in such environments constitutes a critical challenge for the design of high-voltage pulsed power equipment and the evaluation of insulation reliability. Compared with traditional AC or DC voltages, the extremely short rise time and high amplitude of nanosecond pulses lead to significant space charge effects and distinct discharge development mechanisms, thereby posing severe challenges to prediction models based on classical theories. In recent years, with the rapid improvement of computing power and breakthroughs in artificial intelligence algorithms, data-driven machine learning methods have demonstrated great potential in solving complex nonlinear insulation problems.
Targeting this challenge under nanosecond pulses, this paper selects four algorithms: support vector machine (SVM), multi-layer perceptron (MLP), random forest (RF), and extreme gradient boosting (XGBoost). These were trained to predict flashover voltage data under different experimental conditions over the multi-scale gap-distance range of 15 mm to 500 mm.
First, external operating conditions such as electric field distribution, voltage waveform, and gas pressure were parametrically extracted and characterized. The Pearson correlation coefficient was employed for correlation analysis of these characteristic parameters, and 22 features were ultimately screened out as model inputs. Subsequently, Bayesian optimization was used to tune the hyperparameters of the four algorithms, with 10-fold cross-validation selecting the optimal hyperparameter combination for each. The training set was then used to train each algorithm, which was subsequently validated on the test set.
The four algorithms demonstrated good overall performance. Among them, RF and XGBoost exhibited excellent performance on the training set but poor performance on the validation set, which is likely a manifestation of overfitting in ensemble learning and indicates weak generalization ability. SVM achieved relatively outstanding performance on both the training set and the validation set. Furthermore, the generalization performance of the SVM and XGBoost algorithms was validated using data outside the sample dataset. The results showed that SVM yielded better predictions on such data.
SVM achieved high prediction accuracy on the training set, test set, and data outside the sample dataset, making it more suitable for the insulation design of electromagnetic pulse simulation devices.
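The Pearson-based feature screening step described above can be sketched in a few lines of plain Python. This is an illustrative stand-in only: the function names and the 0.1 threshold are assumptions, not the paper's values.

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def screen_features(features, target, threshold=0.1):
    """Keep the features whose |r| against the target exceeds the threshold.

    features: dict mapping feature name -> column of values.
    target:   the flashover-voltage column.
    """
    return [name for name, column in features.items()
            if abs(pearson_r(column, target)) > threshold]

# Toy example: "a" tracks the target perfectly, "b" is uncorrelated with it.
selected = screen_features({"a": [1, 2, 3, 4], "b": [1, -1, -1, 1]}, [1, 2, 3, 4])
```

In practice a library implementation (e.g. `scipy.stats.pearsonr`) would replace the hand-rolled coefficient; the sketch only shows the screening logic.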
Available online, doi: 10.11884/HPLPB202638.250248
Abstract:
Background Purpose Methods Results Conclusions
Pulse step modulation (PSM) high-voltage power supply is widely used in the heating systems of the Experimental Advanced Superconducting Tokamak (EAST). This power supply adopts a modular topology, where the high output voltage is generated by superimposing the outputs of multiple independent DC power modules. In conventional designs, input over-voltage and under-voltage protection for each power module is achieved by installing individual voltage sensors across the input capacitors.
However, this method requires a large number of voltage sensors, which significantly increases system monitoring costs and complicates the hardware detection circuitry. To address these limitations, this study aims to develop a sensorless voltage measurement (SVM) method capable of estimating the input voltage of each power module using only a single voltage sensor on the output side.
This paper first introduces the circuit topology of the PSM high-voltage power supply and provides a detailed analysis of its control strategy. Building on this foundation, a novel sensorless voltage detection technique is proposed to estimate the input voltage of each power module. The method utilizes only one voltage sensor installed at the output side of the PSM high-voltage power supply to collect voltage signals, from which the input voltages of individual modules are derived through algorithmic processing.
To validate the proposed method, a model was constructed and tested based on the RT-LAB real-time simulation platform. Experimental results demonstrate that the SVM technique can effectively estimate input voltages, thereby confirming the feasibility of the proposed method.
The study concludes that the SVM method not only reduces the number of required sensors and associated costs but also simplifies the system architecture while maintaining reliable module-level voltage monitoring. The findings provide valuable insights for the design of modular power supplies in large-scale experimental setups and suggest potential applications in other multi-module power electronic systems.
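The estimation idea lends itself to a toy illustration. Assuming an idealized PSM supply whose output voltage is simply the sum of the inserted modules' input voltages, and a controller that knows each sample's switching pattern, the per-module voltages follow from a small linear solve. All names and the noise-free model below are assumptions, not the paper's algorithm.

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def estimate_module_voltages(patterns, outputs):
    """Recover per-module voltages V_i from samples of V_out = sum(s_i * V_i).

    patterns: one 0/1 switching-state row per output sample (must be invertible).
    outputs:  the corresponding measured output voltages.
    """
    return solve_linear([[float(s) for s in row] for row in patterns], outputs)

# Three modules at 100/200/300 V, observed through three switching patterns.
volts = estimate_module_voltages([[1, 1, 0], [0, 1, 1], [1, 0, 1]],
                                 [300.0, 500.0, 400.0])
```

A real implementation would additionally filter measurement noise and track the voltages as the patterns cycle during PSM operation.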
Available online, doi: 10.11884/HPLPB202638.250204
Abstract:
Background Purpose Methods Results Conclusions
The PFN-Marx pulse driver with millisecond charging holds significant potential for achieving lightweight and miniaturized systems. To ensure its long-life, stable, and reliable operation, the development of a triggered gas gap switch represents a key technological challenge.
This study aims to address issues related to the large dispersion in operating voltage and rapid erosion of the trigger electrode under millisecond charging conditions.
Based on the operating mechanism of the corona-stabilized switch, a corona-based gas-triggered switch was developed. Investigations were conducted on its structural design, electrostatic field simulation, trigger source development, operational voltage range, time delay, and jitter characteristics. These efforts resolved the problem of frequent self-breakdown or trigger failure under millisecond charging.
Experimental results demonstrate that, using SF6 as the working gas at a pressure of 0.6 MPa, the maximum operating voltage of the triggered switch reaches 90 kV. Under conditions of 84 kV operating voltage, 20 Hz repetition frequency, 500 pulses per burst, and without gas replacement, the switch was tested continuously for 100,000 pulses. Only one self-breakdown incident occurred during this period, resulting in a self-breakdown rate of less than 0.01‰.
The triggered switch developed in this study meets the design requirements and effectively resolves the instability issues under millisecond charging conditions, thereby providing a foundation for future engineering applications.
Available online, doi: 10.11884/HPLPB202638.250018
Abstract:
Background Purpose Methods Results Conclusions
Field-programmable gate array (FPGA)-based time-to-digital converters (TDCs) have been extensively employed for high-precision time interval measurements, in which picosecond-level resolution is often required. Among existing approaches, the delay-line method remains widely used, while the system clock frequency and the delay chain design are recognized as the primary factors affecting resolution and linearity.
The objective of this study is to develop a multi-channel FPGA-TDC architecture that integrates multiphase clocking with delay-line interpolation, thereby lowering the operating frequency, improving linearity, and reducing hardware resource utilization, while maintaining high measurement resolution.
A two-stage interpolation scheme was introduced, where fine time measurement cells were implemented through the combination of multiphase clocks and shortened delay chains. This configuration mitigates the accumulation of nonlinearity in the delay elements and reduces the scale of thermometer-to-binary encoders, resulting in decreased logic overhead. The proposed TDC was implemented on a Xilinx ZYNQ-7035 device, and its performance was evaluated within a measurement range of 0–16000 ps.
The experimental evaluation demonstrated that a time resolution better than 4 ps was achieved. The measured differential nonlinearity (DNL) was in the range of −1 least significant bit (LSB) to +7 LSB, while the integral nonlinearity (INL) ranged from −2 LSB to +14 LSB. Compared with conventional architectures, the proposed scheme reduces the delay chain length severalfold at the same operating frequency, or achieves a lower operating frequency with the same chain length.
The proposed two-stage interpolation architecture not only enhances resolution and linearity but also significantly reduces logic resource consumption, demonstrating strong application potential.
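As a rough illustration of the encoder-size savings the abstract refers to, the thermometer-to-binary conversion and the two-stage time assembly can be sketched as follows. Parameter names and the exact combination arithmetic are assumptions; real FPGA designs additionally handle bubbles in the thermometer code and per-cell calibration.

```python
def thermometer_to_binary(code):
    """Convert a thermometer code (run of 1s along the delay chain, then 0s)
    to the count of stages the signal propagated through. Counting the 1s is
    bubble-tolerant and maps naturally onto an FPGA adder tree; a shorter
    chain means a proportionally smaller encoder."""
    return sum(code)

def fine_time(coarse_count, thermo_code, phase_index, n_phases, delay_ps, clk_ps):
    """Two-stage interpolation (illustrative arithmetic only): a coarse clock
    counter, a multiphase-clock phase index, and a short delay-chain code are
    combined into a timestamp in picoseconds."""
    phase_ps = clk_ps / n_phases            # width of one multiphase slot
    return (coarse_count * clk_ps           # coarse counter contribution
            + phase_index * phase_ps        # which clock phase fired
            + thermometer_to_binary(thermo_code) * delay_ps)  # fine residue
```

With, say, a 250 MHz clock (4000 ps period) and four phases, each delay chain only needs to span 1000 ps instead of the full period, which is the source of the shortened-chain and reduced-encoder claims.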
Available online, doi: 10.11884/HPLPB202638.250113
Abstract:
Background Purpose Methods Results Conclusions
Electromagnetic pulse welding (EMPW) is an emerging solid-state welding technology. Its application in connecting power conductors and terminals can effectively enhance joint reliability. However, EMPW joints exhibit unbonded intermediate zones, and their tensile performance requires improvement, which severely restricts the application of EMPW technology in power conductor connections.
To address this, this paper proposes a split field shaper structure to further improve the bonding performance of electromagnetic pulse welded joints.
To validate the effectiveness of the proposed split field shaper structure, this paper combines equivalent circuit analysis, finite element simulation, and mechanical property testing of experimental specimens.
Theoretical analysis of both the split and the traditional field shaper provides the basis for the split field shaper design. Finite element simulations reveal how the field shaper structure influences the electromagnetic and motion parameters of the joint deformation zone. Mechanical property tests validated the split field shaper's enhancement of joint bonding performance. Experimental results demonstrate that, compared with the traditional one-piece field shaper, joints prepared using the split field shaper exhibit a 22.73% increase in tensile performance and a 2.68 mm extension in the total weld length.
The proposed split field shaper successfully enhances joint mechanical properties relative to the conventional field shaper while maintaining overall dimensional consistency.
Available online, doi: 10.11884/HPLPB202638.250187
Abstract:
Airborne Synthetic Aperture Radar (SAR) is vulnerable to continuous wave (CW) interference in complex electromagnetic environments, leading to significant degradation in imaging quality. Its susceptibility to front-door coupling electromagnetic effects is a critical concern. This study aims to systematically investigate the impact patterns and physical mechanisms of single-frequency CW interference on airborne SAR imaging through equivalent injection experiments. It further seeks to establish a robust evaluation method for interference effects. Equivalent injection testing was employed to simulate CW interference susceptibility. The interference effect was evaluated using a composite SAR image quality factor integrating the Pearson Correlation Coefficient (PCC), Structural Similarity Index (SSIM), and Peak Signal-to-Noise Ratio (PSNR). Detailed analysis of the radio frequency (RF) front-end response and Analog-to-Digital Converter (ADC) behavior under interference was conducted. Significant interference effects were observed when the interfering frequency fell within the receiver's hardware passband (8.5–9.5 GHz) and the Jammer-to-Signal Ratio (JSR) reached 15 dB. While the RF front-end exhibited no significant nonlinearity, the interference induced a nonlinear response specifically within the internal Metal-Oxide-Semiconductor Field-Effect Transistor (MOSFET) of the ADC sampling chip. This nonlinearity generated additional DC components and harmonics, identified as the fundamental physical cause of characteristic interference stripes and overall SAR image quality degradation. The generation of DC offsets and harmonic distortion within the ADC's MOSFET circuitry is the root physical mechanism behind SAR image degradation under CW interference within the specified band and JSR threshold.
This research provides a solid theoretical foundation for designing electromagnetic interference (EMI) countermeasures in airborne SAR systems, thereby enhancing their robustness and imaging capability in challenging complex electromagnetic environments.
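A composite quality factor combining PCC, SSIM, and PSNR can be illustrated on flattened pixel arrays. The equal weighting, the global (single-window) SSIM, and the PSNR normalization below are assumptions for illustration, not the paper's definition.

```python
import math

def _stats(x, y):
    """Means, variances, and covariance of two flattened images."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return mx, my, vx, vy, cxy

def pcc(x, y):
    mx, my, vx, vy, cxy = _stats(x, y)
    return cxy / math.sqrt(vx * vy)

def ssim_global(x, y, L=255.0):
    """Single-window SSIM; a full implementation uses local sliding windows."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my, vx, vy, cxy = _stats(x, y)
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def psnr(x, y, L=255.0):
    mse = sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)
    return float("inf") if mse == 0 else 10 * math.log10(L ** 2 / mse)

def quality_factor(x, y, w=(1/3, 1/3, 1/3), psnr_ref=50.0):
    """Composite score; PSNR is clipped to [0, 1] against an assumed 50 dB cap."""
    p = min(psnr(x, y) / psnr_ref, 1.0)
    return w[0] * pcc(x, y) + w[1] * ssim_global(x, y) + w[2] * p
```

For identical reference and interfered images the factor evaluates to 1; interference stripes lower all three components and hence the composite score.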
Available online, doi: 10.11884/HPLPB202537.250019
Abstract:
Background Purpose Methods Results Conclusions
Fiber laser coherent beam combining technology enables high-power laser output through precise phase control of multiple laser channels. However, factors such as phase control accuracy, optical intensity stability, communication link reliability, and environmental interference can degrade system performance.
This study aims to address the challenge of anomaly detection in phase control for large-scale fiber laser coherent combining by proposing a novel deep learning-based detection method.
First, ten-channel fiber laser coherent combining data were collected, system control processes and beam combining principles were analyzed, and potential anomalies were categorized to generate a simulated dataset. Subsequently, an EMA-Transformer network model incorporating a lightweight Efficient Multi-head Attention (EMA) mechanism was designed. Comparative experiments were conducted to evaluate the model's performance. Finally, an eight-beam fiber laser coherent combining experimental setup was established, and the algorithm was deployed using TensorRT for real-time testing.
The proposed algorithm demonstrated significant improvements, achieving approximately 50% higher accuracy on the validation set and a 2.20% enhancement on the test set compared to ResNet50. In practical testing, the algorithm achieved an inference time of 2.153 ms, meeting real-time requirements for phase control anomaly detection.
The EMA-Transformer model effectively addresses anomaly detection in fiber laser coherent combining systems, offering superior accuracy and real-time performance. This method provides a promising solution for enhancing the stability and reliability of high-power laser systems.
Available online, doi: 10.11884/HPLPB202638.250112
Abstract:
Background Purpose Methods Results Conclusions
Envelope instabilities and halo formation are critical challenges limiting beam quality in space-charge-dominated beams of low-energy superconducting proton linear accelerators. The dynamic evolution of focusing parameters during acceleration and the intrinsic role of double-period focusing structures in the low-energy region in these phenomena remain insufficiently explored.
This study aims to systematically investigate the influence of dynamically evolving focusing parameters on envelope instabilities, reveal the relationship between double-period focusing structures and halo formation, and achieve localized breakthroughs of the zero-current phase advance σ0 beyond 90° while optimizing beam quality.
A theoretical model was established via the second-order even-mode expansion of the Vlasov–Poisson equations. Multiple evolution schemes were designed, and multi-particle simulations were performed on low-energy proton beams (normalized RMS emittance: 0.2–0.4 π·mm·mrad). The particle–core model was used to compare halo formation mechanisms between quasi-periodic and double-period structures, with two-dimensional and three-dimensional models verifying key findings.
For weak space-charge effects (high η), σ0 can exceed 90° without degrading beam quality; strong space-charge effects (low η) induce resonances and emittance growth, especially in doublet structures. Double-period structures cause envelope instability even with σ0 < 90° per cell and are more prone to halo formation via the 2:1 resonance. Longitudinal beam size variations alter core charge density (a new halo mechanism), and higher-order resonances contribute significantly. The number of short-period cells (N) correlates inversely with resonance probability.
Dynamic focusing parameters and double-period structures strongly affect envelope instabilities and halo formation. The 2:1 resonance and longitudinal-transverse coupling are key halo mechanisms. A σ0 breakthrough beyond 90° is feasible under weak space-charge conditions, and increasing N reduces resonance risk. These findings provide theoretical and numerical support for beam quality optimization in low-energy superconducting proton linacs.
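For readers outside the field, the parameter η above is, assuming the standard convention, the tune depression: the ratio of the space-charge-depressed phase advance per focusing period to its zero-current value,

```latex
\eta \;=\; \frac{\sigma}{\sigma_0}, \qquad 0 < \eta \le 1 .
```

Thus "high η" (η → 1) is the emittance-dominated, weak space-charge limit in which σ0 > 90° is tolerable, while "low η" (η → 0) is the space-charge-dominated limit in which the envelope instability and halo resonances described above set in.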
Available online, doi: 10.11884/HPLPB202537.250194
Abstract:
Background Purpose Methods Results Conclusions
The Low Energy High Intensity High Charge State Heavy Ion Accelerator Facility (LEAF) is a national scientific instrument developed by the Institute of Modern Physics, Chinese Academy of Sciences, to provide high-current, high-charge-state, full-spectrum low-energy heavy ion beams for interdisciplinary studies.
To meet research needs in nuclear astrophysics, atomic and molecular physics, and nuclear materials, LEAF offers tunable energies from 0.3 to 0.7 MeV/u and supports continuous-wave acceleration for ions with A/q = 2-7.
This paper presents an overview of the construction progress, key design parameters, and operational performance of the facility, summarizing recent achievements and outlining future development goals.
The paper introduces the system architecture—comprising the 45 GHz superconducting ECR ion source FECR, RFQ, IH-DTL, and terminal beamlines—and describes beam commissioning and diagnostic approaches.
LEAF has successfully achieved stable acceleration of multi-species, high-charge-state heavy ion beams with intensities up to 1 emA. It has delivered more than 13,000 hours of beam time, realized efficient operation of "cocktail" multi-ion beams, and established a high-current, low-energy-spread 12C2+ beamline for precise reaction measurements in the Gamow window.
These results verify LEAF’s excellent beam quality and operational reliability. Planned upgrades—including an extended energy tuning range and triple-ion beam capability—will further enhance its role as a frontier platform for experimental studies in nuclear astrophysics and radiation effects in advanced materials.
Available online, doi: 10.11884/HPLPB202537.250038
Abstract:
Background Purpose Methods Results Conclusions
To enhance the performance of the next-generation X-ray free electron laser (XFEL), a photocathode RF gun capable of providing the required high-quality electron beam with a small emittance has been a significant research objective. In comparison to the conventional L-band or S-band RF gun, the C-band RF gun features a higher acceleration gradient above 150 MV/m and the ability to generate a small-emittance beam. Low-emittance electron beams are critical for enhancing XFEL coherence and brightness, driving demand for advanced RF gun designs. For a bunch charge of 100 pC, a normalized emittance of less than 0.2 mm·mrad has been expected at the gun exit.
This paper presents the design of an emittance measurement device, which can accurately measure such a small emittance at the C-band RF gun exit to ensure beam quality for XFEL applications.
To achieve the desired accuracy, the primary parameters (slit width, slit thickness, and beamlet-drift length) have been systematically optimized through numerical simulations using Astra and Python based on the single-slit-scan method. Dynamic errors, including motor displacement and imaging resolution, were quantified to ensure measurement reliability.
The evaluations indicate that the measurement error of 95% emittance is less than 5%, employing a slit width of 5 μm, a slit thickness of 1 mm, and a beamlet-drift length of 0.11 m under dynamic conditions.
This optimized emittance measurement device supports precise beam quality characterization for XFELs, offering potential for further advancements in electron beam diagnostics.
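The slit-scan evaluation rests on two standard relations that can be sketched directly: the beamlet divergence recovered from slit and screen positions after a drift, and the RMS emittance from centered second moments. This is a generic illustration, not the paper's reconstruction code.

```python
import math

def divergence_from_screen(x_slit, x_screen, drift):
    """Single-slit scan: beamlet divergence x' inferred from the slit position
    and the beamlet centroid on the screen after a drift of length `drift`
    (all lengths in the same unit; x' comes out in rad)."""
    return (x_screen - x_slit) / drift

def rms_emittance(x, xp):
    """Unnormalized RMS emittance from particle coordinates x and divergences
    x': sqrt(<x^2><x'^2> - <x x'>^2), using centered second moments."""
    n = len(x)
    mx, mxp = sum(x) / n, sum(xp) / n
    xx = sum((a - mx) ** 2 for a in x) / n
    pp = sum((b - mxp) ** 2 for b in xp) / n
    cross = sum((a - mx) * (b - mxp) for a, b in zip(x, xp)) / n
    return math.sqrt(xx * pp - cross ** 2)

# Example: with a 0.11 m drift, an 11 mm screen offset maps to 0.1 rad.
xp_sample = divergence_from_screen(0.0, 0.011, 0.11)
```

The quoted 95% emittance additionally requires discarding the outermost 5% of the (x, x') distribution before taking the moments, which is omitted here.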
Available online, doi: 10.11884/HPLPB202638.250171
Abstract:
Background Purpose Methods Results Conclusions
The use of high-power lasers for Wireless Power Transmission (WPT) in space-based solar power stations poses a potential risk to orbiting spacecraft. Misalignment or system failures could cause the laser beam to irradiate a spacecraft's solar array, potentially inducing discharge phenomena that threaten the spacecraft's safety. Existing research has primarily focused on the thermal damage effects of lasers on solar arrays, while studies on the characteristics of laser-induced discharge remain insufficient.
This study aims to systematically investigate the influence of two key laser parameters, energy and wavelength, on the discharge characteristics of spacecraft solar arrays. The goal is to reveal the underlying mechanisms of laser-induced discharge, thereby providing a theoretical and experimental basis for the safe application of high-power laser wireless energy transmission technology.
The mechanism of laser-induced solar array discharge was analyzed based on laser-induced plasma theory and discharge mechanisms within the Low Earth Orbit (LEO) plasma environment. Guided by this theoretical framework, the experimental parameters for the laser-induced spacecraft solar array discharge test were determined. The experiment analyzed the probability of discharge induced by a 532 nm laser at different energy levels and acquired discharge duration data. Probability-time distribution curves were established, and the probability functions for discharge duration under different laser energies were obtained by fitting with a double Poisson distribution. Furthermore, a comparative study was conducted on the peak discharge current and the duration probability functions induced by 532 nm and 266 nm wavelength lasers at the same energy level.
The experimental results demonstrate that higher laser energy leads to a greater probability of induced discharge and longer discharge durations. Shorter laser wavelengths result in a lower discharge threshold and induce discharge events with higher peak currents. The discharge risk parameter increases significantly with shorter wavelength and higher energy.
Laser energy and wavelength are critical factors affecting the discharge risk of solar arrays. Short-wavelength, high-energy lasers pose a greater threat to solar array safety. The findings of this study provide important guidance for selecting laser parameters in WPT systems and for designing protective measures for solar arrays.
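If the "double Poisson distribution" used for the duration fits is read as a two-component Poisson mixture over binned discharge durations (an assumption; the paper may define the term differently), its probability function can be sketched as:

```python
import math

def poisson_pmf(k, lam):
    """Poisson probability mass function P(K = k) with mean lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def double_poisson_pmf(k, w, lam1, lam2):
    """Two-component Poisson mixture: w * Pois(lam1) + (1 - w) * Pois(lam2).

    w    : weight of the first component, 0 <= w <= 1.
    lam1 : mean of the short-duration population (in bin units).
    lam2 : mean of the long-duration population (in bin units).
    """
    return w * poisson_pmf(k, lam1) + (1 - w) * poisson_pmf(k, lam2)
```

Fitting w, lam1, and lam2 to the measured probability-time curves (e.g. by least squares or maximum likelihood) would then yield the duration probability functions reported for each laser energy.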
Available online, doi: 10.11884/HPLPB202537.250150
Abstract:
Background Purpose Methods Results Conclusions
High-power GaN-based blue diode lasers have wide application prospects in industrial processing, copper welding, 3D printing, underwater laser communication, and other technical fields. The chip-on-submount (COS) unit packaged on a heat sink is a single component applicable to the fabrication of high-power GaN-based blue diode lasers, offering the advantages of low thermal resistance and small size.
However, the low reliability of this device still limits the industrial application of the COS unit in high-power GaN-based blue diode lasers, and its performance degradation factors need to be studied.
In this paper, the degradation factors of high-power blue-light COS components were studied using optical microscopy, scanning electron microscopy (SEM), and energy-dispersive spectroscopy (EDS).
Experimental study and analysis show that the performance degradation of high-power blue laser diode (LD) chips is mainly related to defects in the GaN substrate material, foreign-matter deposition on the cavity facet, and photochemical corrosion. Comparative experiments further reveal that the threshold current growth rate of hermetically sealed LDs (~0.14 mA/h) is lower than that of unsealed LDs (~0.27 mA/h).
Hermetic packaging of high-power blue COS unit devices can therefore enhance their reliability, providing a reference for subsequent engineering applications.
Available online, doi: 10.11884/HPLPB202638.250182
Abstract:
Achieving high-efficiency and high-power operation under low magnetic fields is an important development trend for high-power microwave sources. To enhance efficiency under low guiding magnetic fields, a high-efficiency coaxial dual-mode relativistic Cherenkov oscillator (RCO) is proposed. The RCO works in both the coaxial quasi-TEM mode and the TM01 mode and realizes high-efficiency output at low magnetic fields (<0.4 T). In particle-in-cell simulation, when the guiding magnetic field is only 0.35 T, the RCO achieves a microwave output of 3 GW with a beam-wave conversion efficiency of 40%. In addition, to address the RF breakdown phenomenon observed in experiments, the power capacity was improved by increasing the number of slow-wave-structure periods, which was verified by both simulation and experiment. In the experiment, under a magnetic field of 0.37 T, the output power is 2.85 GW with a pulse width of 57 ns and a conversion efficiency of 34%. These experimental results obtained under low magnetic field provide strong support for the miniaturization of high-power microwave systems.
Available online, doi: 10.11884/HPLPB202638.250176
Abstract:
Background Purpose Methods Results Conclusions
Global Navigation Satellite System (GNSS) compatible receiver antennas, which integrate multiple global navigation constellations, feature more complex front-door radio frequency (RF) channel architectures than single-constellation GPS antennas. However, high power microwave (HPM) effect research on GNSS-compatible antennas with complex RF front-ends has rarely been reported.
To investigate HPM effects on GNSS-compatible antennas, radiation experiments were carried out on a type of GNSS-compatible receiver antenna, and a customized characterization approach was designed to analyze the damaged antennas and identify the specific failed components within the complex RF front-end.
Analysis of the antenna's RF front-end revealed a design with two separate RF channels (around 1.25 GHz and 1.6 GHz), each with a dedicated first-stage low-noise amplifier (LNA), followed by shared second- and third-stage LNAs. The performance of these components was characterized using a customized "hot measurement" setup: a vector network analyzer combined with a test antenna and a DC blocker.
The measurements pinpointed the failure to the first-stage LNA (Q6) of the RF channel corresponding to the HPM source frequency of 1.6 GHz. This specific component showed significant degradation or complete failure. In contrast, the first-stage LNA (Q4) of the other channel (~1.25 GHz) and the shared subsequent amplifier stages (Q2 and Q1) remained unaffected. The root cause was confirmed by replacing the damaged Q6 LNA, which successfully restored the antenna’s full functionality.
This work demonstrates that in a multi-channel RF front-end, HPM effects can be highly localized, selectively damaging the first-stage amplifier of the channel covering the HPM frequency while sparing other sections. The findings provide valuable insights into the HPM vulnerability of complex RF systems and offer a reference methodology for related effect analysis.
Available online, doi: 10.11884/HPLPB202638.250209
Abstract:
Background Purpose Methods Results Conclusions
Gyrotron traveling wave tubes (gyro-TWTs) are vacuum electronic devices with broad application prospects. The magnetron injection gun (MIG) is one of the core components of a gyro-TWT, and its performance directly determines the success or failure of the tube. Current research on MIGs worldwide shows that the working voltage and current of existing MIGs are mostly low, and the velocity spread is generally high, which cannot meet the requirements of future megawatt-class gyro-TWTs.
In order to meet the requirement for MIG with high voltage, high current, and low electron beam velocity spread in the development of megawatt-class high-power gyro-TWT, this paper presents a novel design scheme for a single anode electron gun.
The novel electron gun scheme introduces a curved cathode structure to reduce the velocity spread of the electron beam, while effectively increasing the cathode emission band area and reducing the cathode emission density.
The results of PIC simulation show that under the working conditions of 115 kV and 43 A, the designed electron gun has a transverse to longitudinal velocity ratio of 1.05, a velocity spread of 1.63%, and a guiding center radius of 3.41 mm. The thermal analysis results indicate that the MIG can heat the cathode to 1050 ℃ at a power of 76 W.
The simulation and thermal analysis results indicate that the designed MIG meets the design expectations and satisfies the requirements of high voltage, high current, and low electron beam velocity spread for megawatt-class gyro-TWTs.
Study on the influence of electromagnetic parameters in large-orbit gyrotron electron gun in Ka-band
, Available online , doi: 10.11884/HPLPB202537.250185
Abstract:
Background Purpose Methods Results Conclusions
Gyrotron traveling-wave tubes (gyro-TWTs), based on the electron cyclotron maser mechanism, are extensively utilized in critical military domains such as high-resolution millimeter-wave imaging radar, communications, and electronic countermeasures. Experimental observations indicate that when the cathode magnetic field exceeds a specific range, electron beam bombardment of the tube wall occurs.
This work aims to reduce the risk of damage to the electron gun during experiments and to provide guidance for identifying optimal operating points in the experimental testing of a Ka-band second-harmonic large-orbit gyrotron traveling-wave tube (gyro-TWT).
This paper introduces the formation theory of large-orbit electron guns and analyzes the motion of electron beams in non-ideal CUSP magnetic fields. The electron gun was modeled and simulated using CST Particle Studio and the E-gun software. The effects of magnetic field, operating voltage, and beam current on the quality and trajectories of large-orbit electron beams were investigated.
As the absolute value of the cathode magnetic field increases, both the velocity ratio and the Larmor radius increase, while the velocity spread decreases. With an increase in voltage, the velocity ratio decreases, and the Larmor radius drops to a minimum at a certain point before rising again. Variations in current have limited impact on the Larmor radius and the transverse-to-longitudinal velocity ratio; however, the electron-wave interaction efficiency reaches its maximum at the optimal operating current.
The study demonstrates that excessively low operating voltage leads to high transverse-to-longitudinal velocity ratios (α) and electron back-bombardment, which detrimentally affect the cathode. Therefore, within this voltage range (20–40 kV), the power supply voltage should be increased promptly. Conversely, an excessively high reverse magnetic field at the cathode results in an oversized electron cyclotron radius, causing beam-wall bombardment and gun damage. To prevent electron beam bombardment of the tube wall, the cathode magnetic field should not exceed −85 Gs.
, Available online , doi: 10.11884/HPLPB202638.250155
Abstract:
Background Purpose Methods Results Conclusions
Currently, the bias power supplies in high-voltage electron beam welders, both domestically and internationally, are suspended at a negative high voltage. The output voltage regulation is achieved by sampling the operating current in the high-voltage power circuit. The sampled current signal undergoes multi-stage conversion before being sent to the bias power supply, which then adjusts its output voltage based on the feedback current. This adjusted output voltage, in turn, alters the current in the high-voltage circuit. Since the bias power supply is an inverter-based power source, its response and adjustment cycles are relatively long, and precise step-wise regulation is challenging. Consequently, this leads to significant beam current ripple, poor stability, and inadequate beam current reproducibility, failing to meet the beam-current stability and low-ripple requirements of precision welding.
This paper aims to develop a bias power supply with an adjustable DC output voltage ranging from −100 V to −2 kV, featuring low voltage ripple and high voltage stability. The bias power supply can be connected in series within the high-voltage circuit, enabling rapid adjustment and precise control of the operating beam current through a fast closed-loop feedback control system. Additionally, the bias power supply must operate reliably during load arcing of the electron gun.
The design incorporates absorption and protection methods to address the issue of electron gun load arcing damaging the bias power supply. By connecting the bias power supply in series within the high-voltage circuit and feeding back the operating current in the bias power supply loop, the output voltage (bias cup voltage) is adjusted. The bias cup voltage adaptively regulates according to the beam current magnitude, achieving real-time rapid tracking and fine control of the operating beam current.
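The fast closed-loop regulation described above can be pictured as a digital control loop. The following is a minimal sketch, assuming a PI controller form, gains, and a sign convention that the abstract does not specify (a more negative bias cup voltage suppresses the beam current); it is illustrative only, not the paper's implementation.

```python
def bias_voltage_step(v_bias, i_beam, i_setpoint, kp, ki, integral, dt,
                      v_min=-2000.0, v_max=-100.0):
    """One update of an assumed PI loop that adjusts the bias cup voltage so
    the measured beam current tracks its setpoint. A more negative bias cup
    voltage suppresses the beam current, so a positive current error (too
    much current) drives the voltage more negative."""
    err = i_beam - i_setpoint               # current error, A
    integral += err * dt                    # accumulated error for the I term
    v_new = v_bias - (kp * err + ki * integral)
    v_new = min(max(v_new, v_min), v_max)   # clamp to the -2 kV..-100 V range
    return v_new, integral
```

In the paper's scheme the feedback quantity is the operating current sampled in the bias supply loop itself, which is what permits regulation far faster than the inverter-based supplies described in the background.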
A bias power supply was developed with an adjustable DC output voltage from −100 V to −2 kV, featuring a ripple voltage of ≤0.1% across the entire voltage range, voltage stability better than 0.1%, and an output current greater than 3 mA. When applied to a −150 kV/33 mA high-voltage electron beam welder, it achieved a beam current ripple of ±0.19%, beam current stability better than ±5 μA, and beam current reproducibility of ±0.04%.
Based on the methods of absorption, protection, and adaptive regulation of the bias cup voltage according to the beam current magnitude, a novel bias power supply for high-voltage electron beam welders has been successfully developed. This solution addresses the issues of large beam current ripple, poor stability, and inadequate reproducibility in high-voltage electron beam welding, providing an effective approach for high-stability, precision-controllable welding.
, Available online , doi: 10.11884/HPLPB202638.250049
Abstract:
Background Purpose Methods Results Conclusions
The motion and trapping of high-energy charged particles in the radiation belts are significantly influenced by the structure of Earth's magnetic field. Utilizing different geomagnetic models in simulations can lead to varying understandings of particle loss mechanisms in artificial radiation belts.
This study aims to simulate and compare the trajectories and loss processes of 10 MeV electrons injected at different longitudes and L-values under the centered dipole, eccentric dipole, and International Geomagnetic Reference Field (IGRF) models, to elucidate the influence of geomagnetic field models on particle trapping and loss, particularly within the South Atlantic Anomaly (SAA) region.
The particle loss processes during injection were simulated using the MAGNETOCOSMIC program within the Geant4 Monte Carlo software. Simulations were conducted for 10 MeV electrons at various longitudes and L-values. The trajectories, loss cone angles, and trapping conditions were analyzed and compared among the three geomagnetic models.
The centered dipole model yielded relatively regular and symmetric electron drift trajectories, whereas asymmetry was observed in the eccentric dipole model. The IGRF model produced the most complex and irregular trajectories, best reflecting the actual variability of Earth's magnetic field. Regarding the relationship between loss cone angle and L-value, the IGRF model exhibited the largest loss cone angles, indicating the most stringent conditions for particle trapping. Furthermore, injection longitude significantly influenced loss processes, with electrons approaching the center of the SAA being most susceptible to drift loss.
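As context for the loss-cone comparison, the classical equatorial loss cone of a centered dipole (for particles mirroring at the Earth's surface) can be evaluated directly. This textbook baseline is added here for orientation only; it is not the paper's Geant4/MAGNETOCOSMIC computation.

```python
import math

def dipole_loss_cone_angle(L, deg=True):
    """Equatorial loss cone angle for a centered dipole field.

    Standard result: sin^2(a_L) = (4 L^6 - 3 L^5)^(-1/2), for particles
    mirroring at the dipole surface, with L in Earth radii.
    """
    sin2 = 1.0 / math.sqrt(4.0 * L**6 - 3.0 * L**5)
    alpha = math.asin(math.sqrt(sin2))
    return math.degrees(alpha) if deg else alpha

for L in (1.5, 2.0, 3.0):
    print(f"L = {L}: equatorial loss cone ~ {dipole_loss_cone_angle(L):.1f} deg")
```

The loss cone narrows with increasing L in the dipole baseline; the abstract's finding is that realistic field models (IGRF) widen it relative to this idealized case, tightening the trapping condition.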
The choice of geomagnetic model critically impacts the simulation of particle dynamics in artificial radiation belts. The IGRF model, offering the most detailed field representation, predicts the strictest trapping conditions and most realistic loss patterns, especially within the SAA. These findings enhance the understanding of particle trapping mechanisms and are significant for space environment research and applications.
, Available online , doi: 10.11884/HPLPB202638.250238
Abstract:
Background Purpose Methods Results Conclusions
Accurately simulating the gas-solid coupled heat transfer in high-temperature pebble-bed reactors is challenging due to the complex configuration involving tens of thousands of fuel pebbles. Conventional unresolved CFD-DEM methods are limited in accuracy by their requirement for coarse fluid grids, whereas fully resolved simulations are often prohibitively expensive.
This study aims to develop a semi-resolved function model suitable for fine fluid grids to enable accurate and efficient coupled thermal-fluid simulation in pebble beds.
A Gaussian kernel-based semi-resolved function was introduced to smooth physical properties around particles and compute interphase forces via weighted averaging. The key parameter, the dimensionless diffusion time, was optimized through comparison with Voronoi cell analysis. The model was implemented in an open-source CFD-DEM framework and validated against both a single-particle settling case and a fluidized bed experiment.
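The kernel-weighted averaging at the heart of the semi-resolved approach can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation; in particular, the mapping from the dimensionless diffusion time (0.6) to the kernel bandwidth is left here as a free parameter.

```python
import numpy as np

def gaussian_kernel_weights(cell_centers, particle_pos, bandwidth):
    """Normalized Gaussian weights of fluid cells around one particle.

    w_i is proportional to exp(-|x_i - x_p|^2 / (2 h^2)); the bandwidth h
    plays the role that the dimensionless diffusion time sets in the model.
    """
    d2 = np.sum((cell_centers - particle_pos) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return w / w.sum()

def smoothed_fluid_velocity(cell_centers, cell_velocities, particle_pos, bandwidth):
    """Kernel-weighted fluid velocity seen by the particle, used when the
    fluid grid is finer than the particle (sub-particle-scale grids)."""
    w = gaussian_kernel_weights(cell_centers, particle_pos, bandwidth)
    return w @ cell_velocities
```

The interphase drag is then evaluated with this smoothed velocity rather than the single-cell value, which is what removes the coarse-grid requirement of unresolved CFD-DEM.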
Voronoi cell analysis determined the optimal diffusion time to be 0.6. Exceeding this value over-smooths the spatial distribution and obscures local bed features. The single particle settling case demonstrated excellent agreement with experimental terminal velocities under various viscosities. The fluidized bed simulation successfully captured porosity distribution and the relationship between fluid velocity and particle density, consistent with experimental data. Application to HTR-10 pebble bed thermal-hydraulics showed temperature distributions aligning well with the SA-VSOP benchmark.
The proposed semi-resolved function model effectively overcomes the grid size limitation of traditional CFD-DEM, accurately capturing interphase forces in sub-particle-scale grids. It provides a high-precision and computationally viable scheme for detailed thermal-fluid analysis in advanced pebble-bed reactors.
, Available online , doi: 10.11884/HPLPB202638.250243
Abstract:
Background Purpose Methods Results Conclusions
The traditional Monte-Carlo (MC) method faces an inherent trade-off between geometric modeling accuracy and computational efficiency when addressing real-world irregular terrain modeling.
This paper proposes a fast MC particle transport modeling method based on irregular triangular networks for complex terrains, addressing the technical challenge of achieving adaptive and efficient MC modeling under high-resolution complex terrain scenarios.
The methodology consists of three key phases: First, high-resolution raster-format terrain elevation data are processed through two-dimensional wavelet transformation to precisely identify abrupt terrain variations and extract significant elevation points. Subsequently, the Delaunay triangulation algorithm is employed to construct TIN-structured terrain models from discrete point sets. Finally, the MCNP code's "arbitrary polyhedron" macrobody definition is leveraged to establish geometric planes, with Boolean operations applied to synthesize intricate geometric entities, thereby realizing rapid automated MC modeling for high-resolution complex terrains.
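The first phase (wavelet-based extraction of significant elevation points) can be sketched with a one-level 2D Haar transform; the wavelet basis and thresholding rule actually used in the paper may differ, so treat this as an illustrative reconstruction.

```python
import numpy as np

def haar2d_detail(z):
    """One-level 2D Haar transform of a raster (even dimensions required);
    returns the detail-coefficient energy per 2x2 block. Large values mark
    abrupt elevation changes worth keeping as TIN vertices."""
    a = (z[0::2, :] + z[1::2, :]) / 2.0   # row averages
    d = (z[0::2, :] - z[1::2, :]) / 2.0   # row details
    ad = (a[:, 0::2] - a[:, 1::2]) / 2.0  # horizontal detail
    da = (d[:, 0::2] + d[:, 1::2]) / 2.0  # vertical detail
    dd = (d[:, 0::2] - d[:, 1::2]) / 2.0  # diagonal detail
    return np.sqrt(ad**2 + da**2 + dd**2)

def significant_points(z, frac=0.1):
    """Return (row, col) indices, in the half-resolution grid, of the top
    `frac` most abrupt 2x2 blocks of the elevation raster."""
    e = haar2d_detail(z)
    k = max(1, int(frac * e.size))
    thresh = np.partition(e.ravel(), -k)[-k]
    return np.argwhere(e >= thresh)
```

The selected points would then be fed to a Delaunay triangulation (e.g. scipy.spatial.Delaunay) to build the TIN, whose triangular facets map onto the MCNP "arbitrary polyhedron" macrobody planes.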
Results demonstrate that the proposed method accurately reproduces terrain-induced effects on radiation transport, achieving high-fidelity simulations while significantly compressing the number of cells and enhancing computational efficiency.
This methodology represents a novel approach for large-scale radiation field modeling under complex terrain constraints, demonstrating broad applicability to MC particle transport simulations in arbitrary large-scale complex terrain scenarios.
, Available online , doi: 10.11884/HPLPB202537.250166
Abstract:
Background Purpose Methods Results Conclusions
System-Generated Electromagnetic Pulse (SGEMP) arises from electromagnetic fields produced by photoelectrons emitted from spacecraft surfaces under intense X-ray or γ-ray irradiation. Cavity SGEMP, a critical subset of SGEMP, involves complex interactions within enclosed structures. While scaling laws have been established for external SGEMP, their applicability to cavity SGEMP remains debated due to factors such as photon spectrum distortion caused by variations in cavity wall thickness.
This study aims to validate the applicability of SGEMP scaling laws to cavity SGEMP by proposing a canonical transformation method that maintains constant wall thickness. The goal is to provide a theoretical basis for analyzing cavity SGEMP mechanisms and designing laboratory-scale experiments.
A cylindrical cavity model with an aluminum wall was irradiated by a laser-produced plasma X-ray source. Numerical simulations were performed using a 3D particle-in-cell (PIC) code under two conditions: an original model and a 10× scaled-up model. Key parameters, including grid size and time steps, were scaled according to the derived laws. The wall thickness was kept constant to avoid photon spectrum distortion. Simulations compared electric fields, magnetic fields, charge densities, and current distributions between the two models.
The original and scaled-up models exhibited identical spatial distributions of electromagnetic fields and charge densities. Specific validation results include: peak electric fields decreased from 2.0 MV/m (original) to 200 kV/m (scaled-up); peak magnetic fields decreased from 0.8×10−3 T to 0.8×10−4 T; maximum charge densities dropped from 6.0×10−3 /m3 to 6.0×10−5 /m3. Waveform shapes for currents and fields remained unchanged across models. These results all adhere to the scaling laws.
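The reported peak values can be checked for mutual consistency in a few lines. The exponents below (fields scaling as 1/k, charge density as 1/k²) are inferred from the reported numbers rather than quoted from the paper's formal derivation.

```python
def scaled_value(original, exponent, k=10.0):
    """Predicted peak value after scaling the geometry up by factor k,
    assuming the quantity scales as k**exponent."""
    return original * k ** exponent

# (original, reported scaled value, assumed exponent) for the three peaks
checks = {
    "peak E (MV/m)":      (2.0,    0.2,    -1),
    "peak B (T)":         (0.8e-3, 0.8e-4, -1),
    "peak rho (per m^3)": (6.0e-3, 6.0e-5, -2),
}
for name, (orig, reported, p) in checks.items():
    pred = scaled_value(orig, p)
    print(f"{name}: predicted {pred:g}, reported {reported:g}")
```

All three reported pairs follow the assumed exponents exactly, consistent with the abstract's conclusion that the scaled model reproduces the original.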
The scaling laws for SGEMP are validated for cavity SGEMP when wall thickness remains unchanged. This work provides a universal theoretical tool for cavity SGEMP studies and reliable scaling criteria for laboratory experiments.

