File(s) under permanent embargo
A derivative-free method for quantum perceptron training in multi-layered neural networks
conference contribution
posted on 2020-01-01, 00:00 authored by Tariq Khan, Antonio Robles-Kelly

In this paper, we present a gradient-free approach for training multi-layered neural networks based upon quantum perceptrons. Here, we depart from the classical perceptron and the elemental operations on quantum bits, i.e. qubits, so as to formulate the problem in terms of quantum perceptrons. We then make use of measurable operators to define the states of the network in a manner consistent with a Markov process. This yields a Dirac–Von Neumann formulation consistent with quantum mechanics. Moreover, the formulation presented here has the advantage of a computational efficiency that is independent of the number of layers in the network. This, paired with the natural efficiency of quantum computing, can imply a significant improvement in efficiency, particularly for deep networks. Last but not least, the developments here are quite general in nature, since the approach can also be used for quantum-inspired neural networks implemented on conventional computers.
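As a rough illustration of the quantum-inspired setting the abstract closes with (not the authors' actual algorithm, which is described in the paper itself), a single quantum perceptron can be sketched on a conventional computer as a qubit state transformed by a trainable rotation, with the output read off as a measurement probability. All function names and the simple perturb-and-compare update below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def qubit_state(alpha, beta):
    """Normalised qubit state |psi> = alpha|0> + beta|1>."""
    v = np.array([alpha, beta], dtype=complex)
    return v / np.linalg.norm(v)

def rotation(theta):
    """Single-qubit rotation: a unitary acting as the perceptron 'weight'."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]], dtype=complex)

def quantum_perceptron(theta, state):
    """Apply the trainable unitary, then output P(measuring |1>)."""
    out = rotation(theta) @ state
    return float(np.abs(out[1]) ** 2)

def train_step(theta, state, target, step=0.1):
    """One derivative-free update: try small perturbations of theta
    and keep whichever candidate gives the lowest squared error."""
    best, best_err = theta, (quantum_perceptron(theta, state) - target) ** 2
    for cand in (theta + step, theta - step):
        err = (quantum_perceptron(cand, state) - target) ** 2
        if err < best_err:
            best, best_err = cand, err
    return best
```

For example, starting from the basis state |0> (i.e. `qubit_state(1, 0)`), a rotation by pi/2 maps it to |1>, so the output probability is 1; repeated calls to `train_step` move `theta` toward such a rotation without evaluating any gradient, which is the spirit of the derivative-free training in the title.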
History
Event: Neural Information Processing. International Conference (27th : 2020 : Online from Bangkok, Thailand)
Volume: 1333
Series: Neural Information Processing International Conference
Pagination: 241 - 250
Publisher: Springer
Location: Bangkok, Thailand
Place of publication: Cham, Switzerland
Publisher DOI:
Start date: 2020-11-18
End date: 2020-11-22
ISSN: 1865-0929
eISSN: 1865-0937
ISBN-13: 9783030638221
Language: eng
Publication classification: E1 Full written paper - refereed
Editor/Contributor(s): H Yang, K Pasupa, A Leung, J Kwok, J Chan, I King
Title of proceedings: ICONIP 2020 : Proceedings of the 27th International Conference on Neural Information Processing