In digital hearing aids, the acoustic feedback canceller is an important block for minimizing the feedback echo picked up by the microphone. Conventional digital signal processing (DSP) algorithms for feedback cancellation show slow convergence, which limits their use in real time. Deep learning models have been used to improve feedback echo cancellation, but they have the drawback of requiring extensive parameter tuning and computation. We propose a simplified time-domain long short-term memory (LSTM) based deep learning model for acoustic feedback cancellation under single-talk and double-talk scenarios, with and without noise, at lower computational complexity and with fewer tuning parameters. The proposed model uses 1D convolution layers as the first layer, without any frequency-domain transformation, and the output of the convolution layer is fed to the LSTM. The model is trained on 15 hours of synthetic echo scenarios from the Microsoft AEC Challenge 2021 dataset. The simulation results show the effectiveness of the simplified LSTM model compared to the baseline LSTM model and the dual-signal transformation LSTM network (DTLN-aec) model for acoustic echo cancellation.
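To make the described time-domain front end concrete, the following is a minimal sketch in PyTorch: a learned 1D convolutional encoder operates directly on the raw waveform (no STFT), its frame sequence feeds an LSTM, and the LSTM output masks the encoded frames before a transposed-convolution decoder reconstructs the time-domain output. All layer sizes, the masking strategy, the decoder, and the microphone-only input are illustrative assumptions, not values or design details taken from the paper.

```python
# Minimal sketch of a Conv1D-front-end + LSTM time-domain model.
# Hyperparameters (filters, kernel, stride, hidden units) are assumptions.
import torch
import torch.nn as nn


class TimeDomainConvLSTM(nn.Module):
    def __init__(self, n_filters=64, kernel_size=32, stride=16, hidden_size=128):
        super().__init__()
        # Learned analysis front end on the raw waveform: no frequency-domain transform.
        self.encoder = nn.Conv1d(1, n_filters, kernel_size=kernel_size, stride=stride)
        # LSTM models the temporal dynamics of the encoded frames.
        self.lstm = nn.LSTM(input_size=n_filters, hidden_size=hidden_size, batch_first=True)
        # Project LSTM features back to the encoder dimension to form a mask.
        self.proj = nn.Linear(hidden_size, n_filters)
        # Transposed convolution maps masked frames back to a time-domain signal.
        self.decoder = nn.ConvTranspose1d(n_filters, 1, kernel_size=kernel_size, stride=stride)

    def forward(self, mic_waveform):
        # mic_waveform: (batch, 1, samples) raw microphone signal
        frames = torch.relu(self.encoder(mic_waveform))   # (B, F, T)
        seq = frames.transpose(1, 2)                      # (B, T, F) for the LSTM
        lstm_out, _ = self.lstm(seq)                      # (B, T, H)
        mask = torch.sigmoid(self.proj(lstm_out))         # (B, T, F)
        masked = frames * mask.transpose(1, 2)            # suppress feedback components
        return self.decoder(masked)                       # (B, 1, samples) estimate


if __name__ == "__main__":
    model = TimeDomainConvLSTM()
    mic = torch.randn(1, 1, 16000)   # one second of 16 kHz audio
    print(model(mic).shape)          # torch.Size([1, 1, 16000])
```

In practice a feedback or echo canceller may also take the loudspeaker (reference) signal as a second input, as DTLN-aec does; the sketch above omits this for brevity.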