
Raspberry Pi Project: Building a Voice-Recognition Google Assistant AI Speaker

Hello!! This time, it's an AI speaker!! I based this one on an idea from a student I came across, but no matter how hard I search, I can't find the original post anymore ㅠㅠ If you're reading this and it looks very similar to something you've seen, please tell me which post it was so I can credit the source.

 

And what I borrowed from that maker is... this!!

Source: Daiso official online shopping mall

It's this 1,000-won piggy-bank tin!! Packing all the parts neatly inside this little guy looked really great.

Now let me walk you through the build.

First, I opened up the tin I bought at Daiso!! It really does open as easily as a soda can. Then I drilled a hole for the microphone and another for the RGB LED. Since it's metal, I used the power drill you can see in the background.

For the microphone, I bought the cheap USB type that Daiso sells, as you can see here, and tore it apart!!

The speaker was also a USB type, and it got exactly the same treatment!!

After that, I cut the wires to length, soldered them together, and wrapped everything carefully in insulating tape.

Then I tucked it all into the tin along with the Raspberry Pi!! An AI speaker barely needs the GPIO pins, so there's hardly any wiring and it stays wonderfully simple, haha.

Ta-da, clean, right!? All this constant soldering and taping is building my skills in the strangest places. That's just how making goes, haha.

 

But the coding part was quite a journey. The libraries I used are the Google Assistant SDK for the speech-recognition side and Snowboy for the wake-word side. To use Google Assistant, go to the official page and follow every single step it gives you. There are far too many possible errors along the way to cover one by one, so I'll skip those here.
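For reference, the core setup steps as I remember them from the SDK guide look roughly like this. Treat it as a sketch and follow the official docs for the exact current commands; the client-secret path is a placeholder:

# Rough sketch of the Google Assistant SDK setup (check the official docs).
sudo apt-get install portaudio19-dev libffi-dev libssl-dev
python3 -m venv env
env/bin/python -m pip install --upgrade pip setuptools wheel
env/bin/python -m pip install --upgrade google-assistant-sdk[samples]
env/bin/python -m pip install --upgrade google-auth-oauthlib[tool]
env/bin/google-oauthlib-tool --scope https://www.googleapis.com/auth/assistant-sdk-prototype \
    --save --headless --client-secrets /path/to/client_secret_<client-id>.json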

 

Once you've pulled Google's sample code from git, there is an if block in the middle of the main code that handles the incoming speech. At the top of that block, I hooked in the Snowboy code via sys. The snowboy module fires a trigger when you say "snowboy", and only then does the rest of the code proceed. From there, you can build all kinds of features on top of Google Assistant.
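Conceptually, the hotword stage boils down to the sketch below: the detector blocks on the microphone and runs your callback when "snowboy" is heard (the actual call_snowboy.py, with the LED code added, is further down):

# Conceptual sketch only -- the real call_snowboy.py is shown below.
import sys
import snowboydecoder

def on_hotword():
    print("Hotword!")  # "snowboy" was recognized
    sys.exit()         # exiting lets the next stage (pushtotalk.py) start

detector = snowboydecoder.HotwordDetector(
    "/home/pi/ai_speaker/resources/models/snowboy.umdl", sensitivity=0.5)
detector.start(on_hotword)  # blocks here, sampling the microphone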

 

When I say "tell me the weather", it reads out weather data fetched by web crawling; it can also play saved music, report the current COVID-19 situation, and do a few other things I expected to use a lot!! It was hard work, but so much fun, and I don't think I've made anything as useful and satisfying as this one. haha
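For example, the weather reply can be a simple crawl plus TTS. The sketch below is only illustrative; the URL and CSS selector are placeholders, not the ones this project actually scrapes:

# Illustrative sketch: fetch the current temperature and speak it.
# The URL and the CSS selector are placeholders for whatever page you scrape.
import os
import requests
from bs4 import BeautifulSoup
from gtts import gTTS

html = requests.get("https://example.com/weather?region=seoul").text
soup = BeautifulSoup(html, "html.parser")
temp = soup.select_one(".todaytemp").get_text(strip=True)  # placeholder selector

tts = gTTS(text="지금 기온은 " + temp + "도입니다.", lang="ko")  # "The temperature is now X degrees."
tts.save("/home/pi/ai_speaker/reply.mp3")
os.system("mpg123 /home/pi/ai_speaker/reply.mp3")  # or use playsound, as in pushtotalk.py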

 

First, when call_snowboy fires the trigger, the RGB LED glows prettily, and then the main script, pushtotalk.py, is launched.

I put this bash script into the Raspberry Pi's autostart so it runs on boot (see the note after the script below)~~

 

> run.sh file

 

#!/bin/bash

python3 /home/pi/ai_speaker/call_snowboy.py
python3 /home/pi/ai_speaker/pushtotalk.py
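One thing to note: as written, run.sh runs each stage exactly once and then exits. If you want the speaker to go back to waiting for the hotword after every interaction, a looped variant like this sketch is one option (adapt it to your own setup):

#!/bin/bash
# Sketch of a looped variant: after each Assistant turn,
# go back to waiting for the hotword instead of exiting.
while true; do
    python3 /home/pi/ai_speaker/call_snowboy.py
    python3 /home/pi/ai_speaker/pushtotalk.py
done

For the autostart itself, one common approach is a line such as `bash /home/pi/ai_speaker/run.sh &` in /etc/rc.local before the final `exit 0`; the exact method depends on your setup.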

> call_snowboy.py file

 

import RPi.GPIO as GPIO
import time
import sys
import os
import snowboydecoder
import pygame
import speech_recognition as sr
from gtts import gTTS

GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)

RUNNING = True
red = 13
green = 19
blue = 26

GPIO.setup(red,GPIO.OUT)
GPIO.setup(green,GPIO.OUT)
GPIO.setup(blue,GPIO.OUT)

Freq = 100

RED = GPIO.PWM(red,Freq)
GREEN = GPIO.PWM(green,Freq)
BLUE = GPIO.PWM(blue,Freq)
 



def detected_callback():

    # Light show for when the hotword is heard: sweep the red channel's
    # duty cycle from 99% down to 0%, then sweep green and blue together
    # the same way, with a short delay between steps.
    RED.start(100)
    GREEN.start(0)
    BLUE.start(100)
    for x in range(1, 101):
        RED.ChangeDutyCycle(100 - x)
        time.sleep(0.015)
    for x in range(1, 101):
        GREEN.ChangeDutyCycle(100 - x)
        BLUE.ChangeDutyCycle(100 - x)
        time.sleep(0.015)

    print("Hotword Detected")
    os.system("mpc pause")  # pause any music that is currently playing
    sys.exit()              # exit so run.sh can move on to pushtotalk.py


if __name__ == '__main__':

    detector = snowboydecoder.HotwordDetector("/home/pi/ai_speaker/resources/models/snowboy.umdl", sensitivity=6, audio_gain=6)

    detector.start(detected_callback)

> pushtotalk.py file

 

# Copyright (C) 2017 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import RPi.GPIO as GPIO   # RPi.GPIO added to drive the LED connected to GPIO
import playsound  # playsound added to play the reply.wav file
import subprocess
import pygame

import concurrent.futures
import json
import logging
import os
import os.path
import pathlib2 as pathlib
import sys
import time
import uuid

import click
import grpc
import google.auth.transport.grpc
import google.auth.transport.requests
import google.oauth2.credentials

from google.assistant.embedded.v1alpha2 import (
    embedded_assistant_pb2,
    embedded_assistant_pb2_grpc
)
from tenacity import retry, stop_after_attempt, retry_if_exception

try:
    from . import (
        assistant_helpers,
        audio_helpers,
        browser_helpers,
        device_helpers
    )
except (SystemError, ImportError):
    import assistant_helpers
    import audio_helpers
    import browser_helpers
    import device_helpers

pygame.mixer.init()
hello = pygame.mixer.Sound("ding.wav")

ASSISTANT_API_ENDPOINT = 'embeddedassistant.googleapis.com'
END_OF_UTTERANCE = embedded_assistant_pb2.AssistResponse.END_OF_UTTERANCE
DIALOG_FOLLOW_ON = embedded_assistant_pb2.DialogStateOut.DIALOG_FOLLOW_ON
CLOSE_MICROPHONE = embedded_assistant_pb2.DialogStateOut.CLOSE_MICROPHONE
PLAYING = embedded_assistant_pb2.ScreenOutConfig.PLAYING
DEFAULT_GRPC_DEADLINE = 60 * 3 + 5



class SampleAssistant(object):
    """Sample Assistant that supports conversations and device actions.

    Args:
      device_model_id: identifier of the device model.
      device_id: identifier of the registered device instance.
      conversation_stream(ConversationStream): audio stream
        for recording query and playing back assistant answer.
      channel: authorized gRPC channel for connection to the
        Google Assistant API.
      deadline_sec: gRPC deadline in seconds for Google Assistant API call.
      device_handler: callback for device actions.
    """

    def __init__(self, language_code, device_model_id, device_id,
                 conversation_stream, display,
                 channel, deadline_sec, device_handler):
        self.language_code = language_code
        self.device_model_id = device_model_id
        self.device_id = device_id
        self.conversation_stream = conversation_stream
        self.display = display

        # Opaque blob provided in AssistResponse that,
        # when provided in a follow-up AssistRequest,
        # gives the Assistant a context marker within the current state
        # of the multi-Assist()-RPC "conversation".
        # This value, along with MicrophoneMode, supports a more natural
        # "conversation" with the Assistant.
        self.conversation_state = None
        # Force reset of first conversation.
        self.is_new_conversation = True

        # Create Google Assistant API gRPC client.
        self.assistant = embedded_assistant_pb2_grpc.EmbeddedAssistantStub(
            channel
        )
        self.deadline = deadline_sec

        self.device_handler = device_handler

    def __enter__(self):
        return self

    def __exit__(self, etype, e, traceback):
        if e:
            return False
        self.conversation_stream.close()

    def is_grpc_error_unavailable(e):
        is_grpc_error = isinstance(e, grpc.RpcError)
        if is_grpc_error and (e.code() == grpc.StatusCode.UNAVAILABLE):
            logging.error('grpc unavailable error: %s', e)
            return True
        return False

    @retry(reraise=True, stop=stop_after_attempt(3),
           retry=retry_if_exception(is_grpc_error_unavailable))
    def assist(self):
        """Send a voice request to the Assistant and playback the response.

        Returns: True if conversation should continue.
        """
        continue_conversation = False
        device_actions_futures = []

        self.conversation_stream.start_recording()
        logging.info('Recording audio request.')
        hello.play() # snowboy use signal --jb

        def iter_log_assist_requests():
            for c in self.gen_assist_requests():
                assistant_helpers.log_assist_request_without_audio(c)
                yield c
            logging.debug('Reached end of AssistRequest iteration.')

        # This generator yields AssistResponse proto messages
        # received from the gRPC Google Assistant API.
        for resp in self.assistant.Assist(iter_log_assist_requests(),
                                          self.deadline):
            assistant_helpers.log_assist_response_without_audio(resp)
            if resp.event_type == END_OF_UTTERANCE:
                logging.info('End of audio request detected.')
                logging.info('Stopping recording.')
                self.conversation_stream.stop_recording()
            if resp.speech_results:
                logging.info('Transcript of user request: "%s".',
                             ' '.join(r.transcript
                                      for r in resp.speech_results))
            
                # Custom additions to the Google sample: match Korean keywords
                # in the transcript and hand the command off to run_mpd.py.
                text = ' '.join(r.transcript for r in resp.speech_results)
                if '음악' in text or '노래' in text:
                    if '재생' in text or '켜' in text:
                        print("music on")
                        subprocess.Popen(["nohup", "python3", "/home/pi/ai_speaker/run_mpd.py","start"])
                        sys.exit()
                    elif '정지' in text or '꺼' in text:
                        print("stop")
                        subprocess.Popen(["nohup", "python3", "/home/pi/ai_speaker/run_mpd.py","stop"])
                        sys.exit()
                    elif '다음' in text:
                        print("Next!!")
                        subprocess.Popen(["nohup", "python3", "/home/pi/ai_speaker/run_mpd.py","next"])
                        sys.exit()
                    elif '올려' in text or '높여' in text:
                        print("Volume Up!!")
                        subprocess.Popen(["nohup", "python3", "/home/pi/ai_speaker/run_mpd.py","up"])
                        sys.exit()
                    elif '내려' in text:
                        print("Volume Down!!")
                        subprocess.Popen(["nohup", "python3", "/home/pi/ai_speaker/run_mpd.py","down"])
                        sys.exit()
                if '마스크' in text:           
                    print("MASK")
                    subprocess.Popen(["nohup", "python3", "/home/pi/ai_speaker/run_mpd.py","MASK"])
                    sys.exit()
                if '우산' in text:           
                    print("RAIN")
                    subprocess.Popen(["nohup", "python3", "/home/pi/ai_speaker/run_mpd.py","RAIN"])
                    sys.exit()
                if '온도' in text:           
                    print("TEMP")
                    subprocess.Popen(["nohup", "python3", "/home/pi/ai_speaker/run_mpd.py","TEMP"])
                    sys.exit()
                        
            if len(resp.audio_out.audio_data) > 0:                                
                if not self.conversation_stream.playing:
                    self.conversation_stream.stop_recording()
                    self.conversation_stream.start_playback()
                    logging.info('Playing assistant response.')
                self.conversation_stream.write(resp.audio_out.audio_data)
            if resp.dialog_state_out.conversation_state:
                conversation_state = resp.dialog_state_out.conversation_state
                logging.debug('Updating conversation state.')
                self.conversation_state = conversation_state
            if resp.dialog_state_out.volume_percentage != 0:
                volume_percentage = resp.dialog_state_out.volume_percentage
                logging.info('Setting volume to %s%%', volume_percentage)
                self.conversation_stream.volume_percentage = volume_percentage
            if resp.dialog_state_out.microphone_mode == DIALOG_FOLLOW_ON:
                continue_conversation = True
                logging.info('Expecting follow-on query from user.')
            elif resp.dialog_state_out.microphone_mode == CLOSE_MICROPHONE:
                continue_conversation = False
            if resp.device_action.device_request_json:
                device_request = json.loads(
                    resp.device_action.device_request_json
                )
                fs = self.device_handler(device_request)
                if fs:
                    device_actions_futures.extend(fs)
            if self.display and resp.screen_out.data:
                system_browser = browser_helpers.system_browser
                system_browser.display(resp.screen_out.data)

        if len(device_actions_futures):
            logging.info('Waiting for device executions to complete.')
            concurrent.futures.wait(device_actions_futures)

        logging.info('Finished playing assistant response.')
        self.conversation_stream.stop_playback()
        return continue_conversation

    def gen_assist_requests(self):
        """Yields: AssistRequest messages to send to the API."""

        config = embedded_assistant_pb2.AssistConfig(
            audio_in_config=embedded_assistant_pb2.AudioInConfig(
                encoding='LINEAR16',
                sample_rate_hertz=self.conversation_stream.sample_rate,
            ),
            audio_out_config=embedded_assistant_pb2.AudioOutConfig(
                encoding='LINEAR16',
                sample_rate_hertz=self.conversation_stream.sample_rate,
                volume_percentage=self.conversation_stream.volume_percentage,
            ),
            dialog_state_in=embedded_assistant_pb2.DialogStateIn(
                language_code=self.language_code,
                conversation_state=self.conversation_state,
                is_new_conversation=self.is_new_conversation,
            ),
            device_config=embedded_assistant_pb2.DeviceConfig(
                device_id=self.device_id,
                device_model_id=self.device_model_id,
            )
        )
        if self.display:
            config.screen_out_config.screen_mode = PLAYING
        # Continue current conversation with later requests.
        self.is_new_conversation = False
        # The first AssistRequest must contain the AssistConfig
        # and no audio data.
        yield embedded_assistant_pb2.AssistRequest(config=config)
        for data in self.conversation_stream:
            # Subsequent requests need audio data, but not config.
            yield embedded_assistant_pb2.AssistRequest(audio_in=data)


@click.command()
@click.option('--api-endpoint', default=ASSISTANT_API_ENDPOINT,
              metavar='<api endpoint>', show_default=True,
              help='Address of Google Assistant API service.')
@click.option('--credentials',
              metavar='<credentials>', show_default=True,
              default=os.path.join(click.get_app_dir('google-oauthlib-tool'),
                                   'credentials.json'),
              help='Path to read OAuth2 credentials.')
@click.option('--project-id',
              metavar='<project id>',
              help=('Google Developer Project ID used for registration '
                    'if --device-id is not specified'))
@click.option('--device-model-id',
              metavar='<device model id>',
              help=(('Unique device model identifier, '
                     'if not specifed, it is read from --device-config')))
@click.option('--device-id',
              metavar='<device id>',
              help=(('Unique registered device instance identifier, '
                     'if not specified, it is read from --device-config, '
                     'if no device_config found: a new device is registered '
                     'using a unique id and a new device config is saved')))
@click.option('--device-config', show_default=True,
              metavar='<device config>',
              default=os.path.join(
                  click.get_app_dir('googlesamples-assistant'),
                  'device_config.json'),
              help='Path to save and restore the device configuration')
@click.option('--lang', show_default=True,
              metavar='<language code>',
              default='en-US',
              help='Language code of the Assistant')
@click.option('--display', is_flag=True, default=False,
              help='Enable visual display of Assistant responses in HTML.')
@click.option('--verbose', '-v', is_flag=True, default=False,
              help='Verbose logging.')
@click.option('--input-audio-file', '-i',
              metavar='<input file>',
              help='Path to input audio file. '
              'If missing, uses audio capture')
@click.option('--output-audio-file', '-o',
              metavar='<output file>',
              help='Path to output audio file. '
              'If missing, uses audio playback')
@click.option('--audio-sample-rate',
              default=audio_helpers.DEFAULT_AUDIO_SAMPLE_RATE,
              metavar='<audio sample rate>', show_default=True,
              help='Audio sample rate in hertz.')
@click.option('--audio-sample-width',
              default=audio_helpers.DEFAULT_AUDIO_SAMPLE_WIDTH,
              metavar='<audio sample width>', show_default=True,
              help='Audio sample width in bytes.')
@click.option('--audio-iter-size',
              default=audio_helpers.DEFAULT_AUDIO_ITER_SIZE,
              metavar='<audio iter size>', show_default=True,
              help='Size of each read during audio stream iteration in bytes.')
@click.option('--audio-block-size',
              default=audio_helpers.DEFAULT_AUDIO_DEVICE_BLOCK_SIZE,
              metavar='<audio block size>', show_default=True,
              help=('Block size in bytes for each audio device '
                    'read and write operation.'))
@click.option('--audio-flush-size',
              default=audio_helpers.DEFAULT_AUDIO_DEVICE_FLUSH_SIZE,
              metavar='<audio flush size>', show_default=True,
              help=('Size of silence data in bytes written '
                    'during flush operation'))
@click.option('--grpc-deadline', default=DEFAULT_GRPC_DEADLINE,
              metavar='<grpc deadline>', show_default=True,
              help='gRPC deadline in seconds')
@click.option('--once', default=False, is_flag=True,
              help='Force termination after a single conversation.')
def main(api_endpoint, credentials, project_id,
         device_model_id, device_id, device_config,
         lang, display, verbose,
         input_audio_file, output_audio_file,
         audio_sample_rate, audio_sample_width,
         audio_iter_size, audio_block_size, audio_flush_size,
         grpc_deadline, once, *args, **kwargs):
    """Samples for the Google Assistant API.

    Examples:
      Run the sample with microphone input and speaker output:

        $ python -m googlesamples.assistant

      Run the sample with file input and speaker output:

        $ python -m googlesamples.assistant -i <input file>

      Run the sample with file input and output:

        $ python -m googlesamples.assistant -i <input file> -o <output file>
    """
    # Setup logging.
    logging.basicConfig(level=logging.DEBUG if verbose else logging.INFO)

    # Load OAuth 2.0 credentials.
    try:
        with open(credentials, 'r') as f:
            credentials = google.oauth2.credentials.Credentials(token=None,
                                                                **json.load(f))
            http_request = google.auth.transport.requests.Request()
            credentials.refresh(http_request)
    except Exception as e:
        logging.error('Error loading credentials: %s', e)
        logging.error('Run google-oauthlib-tool to initialize '
                      'new OAuth 2.0 credentials.')
        sys.exit(-1)

    # Create an authorized gRPC channel.
    grpc_channel = google.auth.transport.grpc.secure_authorized_channel(
        credentials, http_request, api_endpoint)
    logging.info('Connecting to %s', api_endpoint)

    # Configure audio source and sink.
    audio_device = None
    if input_audio_file:
        audio_source = audio_helpers.WaveSource(
            open(input_audio_file, 'rb'),
            sample_rate=audio_sample_rate,
            sample_width=audio_sample_width
        )
    else:
        audio_source = audio_device = (
            audio_device or audio_helpers.SoundDeviceStream(
                sample_rate=audio_sample_rate,
                sample_width=audio_sample_width,
                block_size=audio_block_size,
                flush_size=audio_flush_size
            )
        )
    if output_audio_file:
        audio_sink = audio_helpers.WaveSink(
            open(output_audio_file, 'wb'),
            sample_rate=audio_sample_rate,
            sample_width=audio_sample_width
        )
    else:
        audio_sink = audio_device = (
            audio_device or audio_helpers.SoundDeviceStream(
                sample_rate=audio_sample_rate,
                sample_width=audio_sample_width,
                block_size=audio_block_size,
                flush_size=audio_flush_size
            )
        )
    # Create conversation stream with the given audio source and sink.
    conversation_stream = audio_helpers.ConversationStream(
        source=audio_source,
        sink=audio_sink,
        iter_size=audio_iter_size,
        sample_width=audio_sample_width,
    )

    if not device_id or not device_model_id:
        try:
            with open(device_config) as f:
                device = json.load(f)
                device_id = device['id']
                device_model_id = device['model_id']
                logging.info("Using device model %s and device id %s",
                             device_model_id,
                             device_id)
        except Exception as e:
            logging.warning('Device config not found: %s' % e)
            logging.info('Registering device')
            if not device_model_id:
                logging.error('Option --device-model-id required '
                              'when registering a device instance.')
                sys.exit(-1)
            if not project_id:
                logging.error('Option --project-id required '
                              'when registering a device instance.')
                sys.exit(-1)
            device_base_url = (
                'https://%s/v1alpha2/projects/%s/devices' % (api_endpoint,
                                                             project_id)
            )
            device_id = str(uuid.uuid1())
            payload = {
                'id': device_id,
                'model_id': device_model_id,
                'client_type': 'SDK_SERVICE'
            }
            session = google.auth.transport.requests.AuthorizedSession(
                credentials
            )
            r = session.post(device_base_url, data=json.dumps(payload))
            if r.status_code != 200:
                logging.error('Failed to register device: %s', r.text)
                sys.exit(-1)
            logging.info('Device registered: %s', device_id)
            pathlib.Path(os.path.dirname(device_config)).mkdir(exist_ok=True)
            with open(device_config, 'w') as f:
                json.dump(payload, f)

    device_handler = device_helpers.DeviceRequestHandler(device_id)

    @device_handler.command('action.devices.commands.OnOff')
    def onoff(on):
        if on:
            logging.info('Turning device on')
        else:
            logging.info('Turning device off')

    @device_handler.command('com.example.commands.BlinkLight')
    def blink(speed, number):
        logging.info('Blinking device %s times.' % number)
        delay = 1
        if speed == "SLOWLY":
            delay = 2
        elif speed == "QUICKLY":
            delay = 0.5
        for i in range(int(number)):
            logging.info('Device is blinking.')
            time.sleep(delay)

    with SampleAssistant(lang, device_model_id, device_id,
                         conversation_stream, display,
                         grpc_channel, grpc_deadline,
                         device_handler) as assistant:
        # If file arguments are supplied:
        # exit after the first turn of the conversation.
        if input_audio_file or output_audio_file:
            assistant.assist()
            return

        # If no file arguments supplied:
        # keep recording voice requests using the microphone
        # and playing back assistant response using the speaker.
        # When the once flag is set, don't wait for a trigger. Otherwise, wait.
        wait_for_user_trigger = not once
        continue_conversation = assistant.assist()
        # wait for user trigger if there is no follow-up turn in
        # the conversation.
        wait_for_user_trigger = not continue_conversation


if __name__ == '__main__':
    main()
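
By the way, run_mpd.py itself isn't shown in this post; it simply takes its first command-line argument and acts on it. A minimal sketch of such a dispatcher might look like the following (the mpc commands are standard MPD client commands; the MASK/RAIN/TEMP branches are placeholders for where the crawling code would go):

# Minimal sketch of a run_mpd.py-style dispatcher (illustrative only, not
# the exact script used above). Music commands shell out to mpc, the MPD
# command-line client; the info commands stand in for the crawling code.
import os
import sys

def speak(text):
    # Hypothetical TTS helper: synthesize speech with gTTS and play it.
    from gtts import gTTS
    gTTS(text=text, lang="ko").save("/tmp/reply.mp3")
    os.system("mpg123 /tmp/reply.mp3")

cmd = sys.argv[1] if len(sys.argv) > 1 else ""

if cmd == "start":
    os.system("mpc play")          # resume/start playback
elif cmd == "stop":
    os.system("mpc pause")         # pause playback
elif cmd == "next":
    os.system("mpc next")          # skip to the next track
elif cmd == "up":
    os.system("mpc volume +10")    # raise the volume
elif cmd == "down":
    os.system("mpc volume -10")    # lower the volume
elif cmd == "MASK":
    speak("마스크 안내 기능 자리입니다.")  # placeholder for the mask-info crawl
elif cmd == "RAIN":
    speak("우산 안내 기능 자리입니다.")    # placeholder for the rain/umbrella check
elif cmd == "TEMP":
    speak("온도 안내 기능 자리입니다.")    # placeholder for the temperature report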

Here is the final demo video!! Isn't it pretty ㅠㅠ I highly recommend this project to you all!!

 

If this post helped you, please leave a like and a comment. Bye~~